Abstract
We address the problem of argument detection by investigating discourse and communicative text structure. We use a formal graph-based structure called a communicative discourse tree (CDT): a discourse tree (DT) whose edges carry additional labels for verbs that represent communicative actions. Discourse trees are based on rhetoric relations extracted from text according to Rhetorical Structure Theory. We tackle the problem as a binary classification task, where the positive class corresponds to texts with arguments and the negative class to texts without argumentation. We conduct feature engineering for this task, determining which discourse and communicative features are most strongly associated with argumentation. We build and describe a new Intense Argumentation dataset, collect a mixed dataset covering different types of argumentation and different text genres, and evaluate on this mixed dataset.
1 Introduction
When an author attempts to provide an argument for something, a number of argumentation patterns can be employed. An argument is the key point of any persuasive essay or speech. The goal of this paper is to recognize discourse features of text in which an author not only shares her point of view but also provides reasons for it and attempts to prove it. To systematically extract argumentation patterns, we compile the Intense Argumentation Dataset, where authors attempt to back up their complaints with sound argumentation.
Naturally, the text units considered in discourse analysis correspond to argument components, and discourse relations are closely related to argumentative relations. However, the traditional training dataset for rhetoric parsing consists of newspaper articles which do not necessarily involve heavy argumentation, and only relations between adjacent text units are identified. It is still an open question how the proposed discourse relations relate to argumentative relations [2].
To represent the linguistic features of text, we use the following sources: 1) Rhetoric relations between the parts of the sentences, obtained as a discourse tree [21]; 2) Speech acts, communicative actions, obtained as verbs from the VerbNet resource (the verb signatures with instantiated semantic roles). These are attached to rhetoric relations.
The final goal of this ongoing research is to estimate the contribution of each feature type to the problem of argument identification in text fragments.
The main contribution of our work at the current step is the following:
1. We apply the notion of a Communicative Discourse Tree (CDT) to a specific text classification task.
2. We develop part of a text classification framework that includes automatic CDT extraction from text paragraphs, tree kernel learning on CDTs, and kNN learning on CDTs based on computing similarity between them.
3. We apply our framework to a binary classification task on a dataset of mixed texts with different types of argumentation and compare the performance of several learning methods in combination with different features.
4. We build and publish a new language resource, the Intense Argumentation Dataset, containing different patterns of valid and invalid argumentation.
2 Related Work
Most previous work in automated discourse analysis is based on extracting patterns from corpora annotated with discourse relations, most notably the Penn Discourse Treebank (PDTB) [32] and the Rhetorical Structure Theory (RST) Discourse Treebank [3]. An extensive corpus of studies has been devoted to RST parsers, but research on how to leverage RST parsing results for practical NLP problems is rather limited.
It is well known that the argumentation and discourse structure of a text are strongly related. In [1] the authors claim that performing an RST analysis essentially subsumes the task of determining argumentation structure. As was recently shown in [29], RST analysis can in principle support an argumentation analysis. The annotation of discourse structure [41] is also a field closely related to the annotation of argumentation structures [15].
There are different approaches to argument mining. The basic argument model can be represented by a scheme, a set of statements containing three elements: a conclusion, a set of premises, and an inference from the premises to the conclusion [38]. Other models were proposed in [37] and [6]. In general, all these models treat an argument as a conclusion (or claim) and a set of premises (or reasons). Text fragments can be classified into argumentation schemes, templates for typical arguments. Argument mining can thus consist of the following steps: identifying argumentative segments in text [19, 20, 36], clustering and classifying arguments [24], determining argument structure [10, 17], and matching predefined argument schemes [4]. Recent work in argumentation mining studies different discourse-related features, considering arguments that support claims [9, 11, 30]; the relationship between argumentation structure and discourse structure (in terms of Rhetorical Structure Theory) is also a focus of contemporary research [31].
Previously, annotation schemes and approaches for identifying arguments in different domains have been developed, including [27] for legal documents, [40] for newspapers and court cases, [5] for policy modelling, and [33] for persuasive essays.
The concept of automatically identifying argumentation schemes was first discussed in [39] and [4]. Most approaches focus on the identification and classification of argument components. In [10] the authors investigate the argumentation discourse structure of a specific type of communication, online interaction threads. In [16] three kinds of evidence for argument structure identification are combined: linguistic features, topic changes, and machine learning.
3 Communicative Discourse Tree
3.1 Case Study
We consider a controversial article published in the Wall Street Journal about Theranos, a company providing healthcare services, and the company's rebuttal.
RST represents texts by labeled hierarchical structures called Discourse Trees (DTs). The leaves of a DT correspond to contiguous atomic text spans, Elementary Discourse Units (EDUs). EDUs are clause-like units that serve as building blocks. RST relations connect adjacent EDUs to form next-level discourse units, represented by internal nodes; these nodes are in turn subject to linking by RST relations. Discourse units linked by a rhetorical relation are further distinguished by their relative importance in the text: nuclei are the core parts of the relation, and satellites are peripheral or supportive ones.
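As an illustration, the nucleus/satellite structure just described can be sketched as a small tree data type. This is a hypothetical sketch in Python; the class and field names are our own, not taken from the authors' system.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DTNode:
    """Minimal discourse tree node: internal nodes carry an RST relation,
    leaves carry an EDU text span; every node has a nuclearity status."""
    relation: Optional[str] = None   # e.g. "Elaboration" (internal nodes only)
    nuclearity: str = "Nucleus"      # "Nucleus" (core) or "Satellite" (supportive)
    edu: Optional[str] = None        # clause-like text span (leaves only)
    children: List["DTNode"] = field(default_factory=list)

    def is_leaf(self) -> bool:
        return self.edu is not None

# Two adjacent EDUs joined by an Elaboration relation: the first is the
# nucleus (core claim), the second a satellite that elaborates on it.
dt = DTNode(relation="Elaboration", children=[
    DTNode(nuclearity="Nucleus", edu="The Journal published a series of accusations"),
    DTNode(nuclearity="Satellite", edu="that inaccurately portray Theranos"),
])
```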
We build an RST representation of the arguments and observe if a DT is capable of indicating whether a paragraph communicates both a claim and an argumentation that backs it up. We will then explore what needs to be added to a DT so that it is possible to judge if it expresses an argumentation pattern or not.
DTs and their images in this case study are obtained with the software of [35]. This is what happened according to Carreyrou:
"Since October [2015], the Wall Street Journal has published a series of anonymously sourced accusations that inaccurately portray Theranos. Now, in its latest story (“U.S. Probes Theranos Complaints,” Dec. 20), the Journal once again is relying on anonymous sources, this time reporting two undisclosed and unconfirmed complaints that allegedly were filed with the Centers for Medicare and Medicaid Services (CMS) and U.S. Food and Drug Administration (FDA)."
We as readers understand that Theranos attempts to rebuke the claims of the WSJ. But Fig. 1 demonstrates that from the DT alone, with its multiple rhetoric relations of elaboration and a single instance of background, it is unclear whether the author is arguing with opponents or merely enumerating observations.
"Theranos remains actively engaged with its regulators, including CMS and the FDA, and no one, including the Wall Street Journal, has provided Theranos a copy of the alleged complaints to those agencies. Because Theranos has not seen these alleged complaints, it has no basis on which to evaluate the purported complaints."
For the following paragraph, Fig. 2 shows the DT with additional communicative action labels, which help identify the presence of argumentation. When communicative actions are attached to the DT as labels of its terminal arcs, it becomes clear that the author is trying to bring her point across and not merely sharing a fact.
"But Theranos has struggled behind the scenes to turn the excitement over its technology into reality. At the end of 2014, the lab instrument developed as the linchpin of its strategy handled just a small fraction of the tests then sold to consumers, according to four former employees."
3.2 Definition
As can be seen from this example, to show the structure of arguments we need to know the discourse structure of interactions between agents and what kind of interactions they are. We do not need to know the domain of the interaction (here, health), the subjects of these interactions (the company, the journal, the agencies), or which entities are involved, but we do need to take into account the mental, domain-independent relations between them.
A communicative discourse tree (CDT) [7] is a DT whose arcs are labeled with VerbNet expressions for verbs that are communicative actions (CAs). The arguments of these verbs are substituted from the text according to VerbNet frames: the first, and possibly the second and third, arguments are instantiated by agents, and the remaining ones by noun or verb phrases. These phrases are the subjects of the communicative action.
A CA can take the form verb(agent, subject, cause), where the verb characterizes, for example, some sort of interaction between a customer and a company in a complaint scenario (e.g., explain, confirm, remind, disagree, deny), agent identifies either the customer or the company, subject refers to the information transmitted or the object described, and cause refers to the motivation or explanation for the subject. A communicative action associated with a customer claim such as I disagreed with the overdraft fee you charged me because I made a bank deposit well in advance would be represented as disagree(customer, overdraft fee, I made a bank deposit well in advance). VerbNet frames are used to apply the computational part of Speech Act theory to discourse analysis, formalizing CAs.
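The verb(agent, subject, cause) form lends itself to a simple tuple representation, as in the following purely illustrative sketch (the type and field names are our own assumptions, not the authors' code):

```python
from collections import namedtuple

# Hypothetical encoding of a communicative action in the verb(agent,
# subject, cause) form described above; field names are ours.
CommunicativeAction = namedtuple(
    "CommunicativeAction", ["verb", "agent", "subject", "cause"]
)

# The disagree(...) example from the text, encoded as a tuple.
ca = CommunicativeAction(
    verb="disagree",
    agent="customer",
    subject="overdraft fee",
    cause="I made a bank deposit well in advance",
)
```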
For the details of DTs we refer the reader to [12], and for VerbNet frames to [14]. To build CDTs automatically, we combined discourse parsers [13, 35] with our own modules for extracting information from VerbNet [28] into one Java-based system. Our project and examples of CDT representations can be found on GitHub.
4 Text Classification Settings
To evaluate the contribution of our sources, we use two types of learning on CDT graph representations of a paragraph: 1) nearest neighbour (kNN) learning with explicit engineering of graph descriptions, where we measure similarity as the overlap between the graph representation of a given text and that of a given element of the training set; 2) statistical tree kernel learning of structures with implicit feature engineering.
We consider standalone discourse trees and scenario graphs built on communicative actions extracted from the text as well as full CDT graphs.
Our family of pre-baseline approaches is based on keywords and keyword statistics. Since mostly lexical and length-based features are reliable for finding poorly supported arguments [34], we combine non-named entities as features with the number of tokens in the phrase that potentially expresses argumentation.
4.1 Nearest Neighbour
To predict the label of a text, once its CDT is built, one needs to compute its similarity with the CDTs of the positive class and verify whether it is higher than its similarity to the set of CDTs of the negative class. Similarity between CDTs is defined by means of maximal common sub-CDTs [7]. Formal definitions of labeled graphs and the domination relation on them used for the construction of this operation can be found, e.g., in [8]. To handle the meaning of words expressing the subjects of edge labels, we also apply word2vec models [22, 23]. Similarity of meaning is calculated on a word-by-word basis: two words are matched only if they play the same syntactic role. For computing the maximal common sub-CDT we developed our own module, which is also integrated into the project mentioned above.
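A rough approximation of this similarity-based prediction can be sketched as follows. Computing a true maximal common sub-CDT is a graph operation; the sketch below merely approximates each CDT by its set of labeled edges (RST relation, communicative verb) and scores their overlap, which is far weaker than the actual operation, and the function names are our own.

```python
def overlap_similarity(cdt_a, cdt_b):
    """Jaccard overlap between two CDTs approximated as sets of
    (rst_relation, communicative_verb) edge labels. A stand-in for the
    maximal-common-sub-CDT similarity, not an implementation of it."""
    if not cdt_a or not cdt_b:
        return 0.0
    return len(cdt_a & cdt_b) / len(cdt_a | cdt_b)

def knn_predict(query, labeled_cdts, k=3):
    """labeled_cdts: list of (edge_set, label) pairs.
    Returns the majority label among the k most similar training CDTs."""
    ranked = sorted(
        labeled_cdts,
        key=lambda item: overlap_similarity(query, item[0]),
        reverse=True,
    )
    top = [label for _, label in ranked[:k]]
    return max(set(top), key=top.count)
```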
4.2 SVM Tree Kernel
In this study we extend the tree kernel definition to CDTs, augmenting the DT kernel with information on communicative actions [7]. A CDT can be represented by a vector of integer counts of each sub-tree type (without taking into account its ancestors). The terms for communicative actions serving as labels are converted into trees, which are added to the respective nodes for RST relations. For elementary discourse units (EDUs) serving as labels of terminal nodes, only the phrase structure is retained: we label the terminal nodes with the sequence of phrase types instead of parse tree fragments. For evaluation purposes we used the tree kernel builder tool [25].
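The vector-of-subtree-counts view can be illustrated with a toy kernel. The real tool [25] counts all common sub-tree fragments; this hypothetical sketch counts only one-level productions (a node label together with the labels of its direct children), so it is a simplification of the actual kernel, with names of our own choosing.

```python
from collections import Counter

def productions(tree, counts=None):
    """Count one-level productions of a tree given as nested tuples
    (label, child, child, ...). Each key is (label, child_labels)."""
    if counts is None:
        counts = Counter()
    if isinstance(tree, tuple):
        label, *children = tree
        child_labels = tuple(
            c[0] if isinstance(c, tuple) else c for c in children
        )
        counts[(label, child_labels)] += 1
        for c in children:
            productions(c, counts)
    return counts

def kernel(tree_a, tree_b):
    """Inner product of the two production-count vectors: the number of
    matching one-level fragments, weighted by their counts."""
    ca, cb = productions(tree_a), productions(tree_b)
    return sum(ca[key] * cb[key] for key in ca.keys() & cb.keys())
```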
5 Datasets
5.1 New Intense Argumentation Dataset
The set of tagged customer complaints about financial services is available at GitHub.
The purpose of this dataset is to collect texts where authors do their best to bring their points across by employing all means to show that they are right and their opponents are wrong. Complainants are emotionally charged writers who describe problems they encountered with a financial service and how they attempted to solve it. Raw complaints are collected from PlanetFeedback.com for a number of banks submitted in 2006–2010. Four hundred complaints are manually tagged with respect to perceived complaint validity, proper argumentation and detectable misrepresentation.
Judging by the complaints, most complainants are in genuine distress due to a strong deviation between what they expected from a service, what they received, and how it was communicated. Most complaint authors report incompetence, flawed policies, ignorance, indifference to customer needs and misrepresentation on the part of customer service personnel. The authors have frequently exhausted the communicative means available to them; they may be confused and seeking recommendations from other users. The focus of a complaint is proof that the proponent is right and her opponent is wrong, a resolution proposal, and a desired outcome.
Multiple argumentation patterns are used in complaints. The most frequent is a deviation of what has happened from what was expected, according to common sense. This pattern covers both valid and invalid argumentation. The second most popular argumentation pattern cites the difference between what was promised (advertised, communicated) and what was actually received or occurred. This pattern may also mention that the opponent does not play by the rules (valid pattern).
A high number of complaints explicitly say that bank representatives are lying. Lying includes inconsistencies between the information provided by different bank agents, factual misrepresentation, and careless promises (valid pattern).
Another reason complaints arise is the rudeness of bank agents and customer service personnel. Customers cite rudeness whether or not the opponent's point is valid (and complaint and argumentation validity are tagged accordingly). Even when there is neither financial loss nor inconvenience, complainants disagree with everything a given bank does if they have been served rudely (invalid pattern).
Complainants cite their needs as reasons why a bank should behave in certain ways. A popular argument is that since the government, via taxpayers, bailed out the banks, the banks should now favor their customers (invalid pattern).
We refer to this dataset as Intense because of the volume, strength and emotional load of customer complaints. For a given topic such as an insufficient funds fee, this dataset provides many distinct ways of arguing that the fee is unfair. The Intense Argumentation Dataset therefore allows for a systematic exploration of topic-independent clusters of argumentation patterns and for observing a link between argumentation type and overall complaint validity. Other argumentation datasets, including legal arguments, student essays, the internet argument corpus, the fact-feeling dataset and political debates, have a strong variation of topics, so it is harder to track the spectrum of possible argumentation patterns per topic. Unlike professional writing in the legal and political domains, the authentic writing of complaining users has a simple motivational structure and a transparency of purpose, and occurs in a fixed domain and context. In the Intense Argumentation Dataset, the arguments play a critical role in the well-being of the authors, who are subject to an unfair charge of a large amount of money or to eviction from home. Therefore, the authors attempt to provide the strongest argumentation possible to back up their claims and strengthen their case.
Using our Intense Dataset, one can find correlations between argumentation validity, truthfulness and overall complaint validity. If a complaint is not truthful, it is usually invalid: either the customer complains out of a bad mood or she wants to obtain compensation. However, even a truthful complaint can easily be invalid, especially when its arguments are flawed. When an untruthful complaint contains valid argumentation patterns, it is hard for an annotator to properly mark it as valid or invalid. Three annotators worked with this dataset, and inter-annotator agreement exceeds 80%.
5.2 Additional Evaluation Dataset
For the particular task described in this paper we collected a large dataset, which includes the Intense Argumentation Dataset described in the previous section.
The evaluation dataset was divided into two parts, "positive" and "negative". Texts from the first part are expected to contain some kind of argumentation. We formed the positive dataset from several sources to make it non-uniform and to bring together different styles, genres and argumentation types. First, we used data where argumentation is frequent, e.g. opinionated articles from newspapers such as The New York Times (1400 articles), The Boston Globe (1150 articles), the Los Angeles Times (2140 articles) and others (1200 articles).
As mentioned earlier, we also used our new Intense Dataset. Besides, we used the text style and genre recognition dataset [18], which has a specific dimension associated with argumentation (the section [ted], "Emotional speech on a political topic with an attempt to sound convincing"). Finally, we added texts from standard argument mining datasets where the presence of arguments was established by annotators: the "Fact and Feeling" dataset [26] (680 articles) and the "Argument annotated essays v.2" dataset [33] (430 articles).
The negative part of the dataset consists of texts written in a neutral manner. We used Wikipedia (3500 articles), factual news sources (a Reuters feed with 3400 articles), and further sections of the [18] corpus: "Instructions for how to use software" (320 articles); [tele], "Instructions for how to use hardware" (175 articles); [news], "A presentation of a news article in an objective, independent manner" (220 articles); as well as other mixed texts without argumentation (735 articles). In total, the two parts include 8800 texts.
We used Amazon Mechanical Turk to confirm that the positive dataset includes argumentation in the commonsense view, according to the employed workers. Twelve workers with a prior acceptance score above 85% were assigned the labeling task.
6 Evaluation
For the evaluation we split our dataset into training and test parts in a 4:1 proportion and balanced the split with respect to both the label and the source.
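A split balanced by label and source can be sketched as follows. The field names are illustrative, and the exact procedure beyond the 4:1 proportion is our own assumption.

```python
import random
from collections import defaultdict

def balanced_split(samples, test_ratio=0.2, seed=0):
    """samples: list of dicts with 'text', 'label', 'source' keys.
    Shuffles within each (label, source) bucket and holds out
    test_ratio of every bucket, so the split stays balanced."""
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for s in samples:
        buckets[(s["label"], s["source"])].append(s)
    train, test = [], []
    for bucket in buckets.values():
        rng.shuffle(bucket)
        n_test = max(1, round(len(bucket) * test_ratio))
        test.extend(bucket[:n_test])
        train.extend(bucket[n_test:])
    return train, test
```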
An extremely naive approach relies on keywords alone (bag-of-words) to detect the presence of argumentation. The hypothesis here is that people use different words to describe facts than to back them up and explicitly argue. Usually, a couple of communicative actions, at least one of which has negative sentiment polarity (related to an opponent), are sufficient to indicate that argumentation is present. In Table 1, we see that this naive approach is outperformed by the top-performing CDT approach by 22%. A Naive Bayes classifier delivers just a 2% improvement. One can observe that for nearest neighbour learning, DTs and scenario graphs based on CAs indeed complement each other: the f-measure of the full CDT is 17% above the former and 19% above the latter. CAs alone delivered worse results than the standalone DT.
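For reference, a minimal bag-of-words Naive Bayes baseline of the kind compared against here can be sketched as follows. This is our own toy implementation with add-one smoothing, not the classifier used in the paper.

```python
from collections import Counter
import math

class NaiveBayesBaseline:
    """Toy multinomial Naive Bayes over whitespace-tokenized text,
    with add-one (Laplace) smoothing. Illustrative only."""

    def fit(self, texts, labels):
        self.class_counts = Counter(labels)
        self.word_counts = {c: Counter() for c in self.class_counts}
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        words = text.lower().split()
        total = sum(self.class_counts.values())
        best, best_score = None, -math.inf
        for c, n_c in self.class_counts.items():
            score = math.log(n_c / total)  # log prior
            denom = sum(self.word_counts[c].values()) + len(self.vocab)
            for w in words:
                score += math.log((self.word_counts[c][w] + 1) / denom)
            if score > best_score:
                best, best_score = c, score
        return best
```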
Nearest neighbour learning on full CDTs achieves slightly lower performance than SVM TK on full CDTs, but the former yields interesting examples of sub-trees that are typical for argumentation, as well as sub-trees shared with the factual data. The number of sub-trees in the former group is naturally significantly higher (Table 2).
Table 3 shows the SVM TK argument detection results per source. As the positive set, we now take an individual source only. The negative set is formed from the same sources but reduced in size to match the smaller positive set. The cross-validation settings are analogous to our assessment of the whole positive set. We did not find a correlation between the peculiarities of a particular domain and the contribution of discourse-level information to argument detection accuracy. At the same time, all four domains show monotonic improvement as we proceed from Keywords and Naive Bayes to CDT. Since all four sources demonstrate an improvement in argument detection rate due to CDT, we conclude that the same likely holds for other sources of argumentation-related information.
7 Conclusion
In this study we addressed the issue of argumentation detection in text using its communicative and discourse structure. We described several representations that capture this kind of structure and compared two learning methods working over them. The performance of these learning methods showed that the bottleneck of text classification based on textual discourse information lies in the representation means, not in the learning method itself. Comparing inductive learning results with kernel-based statistical learning relying on the same information allowed us to perform more concise feature engineering than either approach would alone. We see that text classification based on nearest neighbour learning shows better results with communicative discourse tree features than with discourse tree features or communicative action features alone. We also built and published a set of tagged customer complaints about financial services, which can be used for future case studies and research in argumentation mining and text classification.
References
Azar, M.: Argumentative text as rhetorical structure: an application of rhetorical structure theory. Argumentation 13(1), 97–144 (1999)
Biran, O., Rambow, O.: Identifying justifications in written dialogs by classifying text as argumentative. Int. J. Semant. Comput. 5(04), 363–381 (2011)
Carlson, L., Marcu, D., Okurowski, M.E.: Building a discourse-tagged corpus in the framework of rhetorical structure theory. In: van Kuppevelt, J., Smith, R.W. (eds.) Current and New Directions in Discourse and Dialogue. Text, Speech and Language Technology, vol. 22, pp. 85–112. Springer, Dordrecht (2003). https://doi.org/10.1007/978-94-010-0019-2_5
Feng, V.W., Hirst, G.: Classifying arguments by scheme. In: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pp. 987–996. Association for Computational Linguistics (2011)
Florou, E., Konstantopoulos, S., Koukourikos, A., Karampiperis, P.: Argument extraction for supporting public policy formulation. In: Proceedings of the 7th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, pp. 49–54. Citeseer (2013)
Freeman, J.B.: Dialectics and the Macrostructure of Arguments: A Theory of Argument Structure, vol. 10. Walter de Gruyter, Berlin (1991)
Galitsky, B.: Discovering rhetorical agreement between a request and response. Dialogue Discourse, 167–205 (2017)
Ganter, B., Kuznetsov, S.O.: Pattern structures and their projections. In: Delugach, H.S., Stumme, G. (eds.) ICCS-ConceptStruct 2001. LNCS (LNAI), vol. 2120, pp. 129–142. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-44583-8_10
Ghosh, D., Khanam, A., Han, Y., Muresan, S.: Coarse-grained argumentation features for scoring persuasive essays. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 549–554. Association for Computational Linguistics (2016)
Ghosh, D., Muresan, S., Wacholder, N., Aakhus, M., Mitsui, M.: Analyzing argumentative discourse units in online interactions. In: Proceedings of the First Workshop on Argumentation Mining, pp. 39–48 (2014)
Habernal, I., Gurevych, I.: Which argument is more convincing? Analyzing and predicting convincingness of web arguments using bidirectional LSTM. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1589–1599. Association for Computational Linguistics (2016)
Joty, S., Carenini, G., Ng, R.T.: A novel discriminative framework for sentence-level discourse analysis. In: Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 904–915. Association for Computational Linguistics (2012)
Joty, S., Moschitti, A.: Discriminative reranking of discourse parses using tree kernels. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2014)
Kipper, K., Korhonen, A., Ryant, N., Palmer, M.: A large-scale classification of English verbs. Lang. Resour. Eval. 42(1), 21–40 (2008)
Kirschner, C., Eckle-Kohler, J., Gurevych, I.: Linking the thoughts: analysis of argumentation structures in scientific publications. In: Proceedings of the 2nd Workshop on Argumentation Mining, pp. 1–11 (2015)
Lawrence, J., Reed, C.: Combining argument mining techniques. In: NAACL HLT 2015, p. 127 (2015)
Lawrence, J., Reed, C., Allen, C., McAlister, S., Ravenscroft, A.: Mining arguments from 19th century philosophical texts using topic based modelling. In: Proceedings of the First Workshop on Argumentation Mining, pp. 79–87. Association for Computational Linguistics (2014)
Lee, D.Y.: Genres, registers, text types, domains and styles: clarifying the concepts and navigating a path through the BNC jungle. Technology 5, 37–72 (2001)
Levy, R., Bilu, Y., Hershcovich, D., Aharoni, E., Slonim, N.: Context dependent claim detection. In: Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pp. 1489–1500. Dublin City University and Association for Computational Linguistics (2014)
Lippi, M., Torroni, P.: Context-independent claim detection for argument mining. In: Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, pp. 185–191 (2015)
Mann, W.C., Thompson, S.A.: Rhetorical structure theory: toward a functional theory of text organization. Text-Interdisc. J. Study Discourse 8(3), 243–281 (1988)
Mikolov, T., Dean, J.: Distributed representations of words and phrases and their compositionality. In: Advances in Neural Information Processing Systems (2013)
Mikolov, T., Chen, K., Corrado, G.S., Dean, J.A.: Computing numeric representations of words in a high-dimensional space. US Patent 9,037,464, 19 May 2015
Misra, A., Anand, P., Fox Tree, J.E., Walker, M.: Using summarization to discover argument facets in online idealogical dialog. In: Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 430–440. Association for Computational Linguistics (2015)
Moschitti, A.: Efficient convolution kernels for dependency and constituent syntactic trees. In: Fürnkranz, J., Scheffer, T., Spiliopoulou, M. (eds.) ECML 2006. LNCS (LNAI), vol. 4212, pp. 318–329. Springer, Heidelberg (2006). https://doi.org/10.1007/11871842_32
Oraby, S., Reed, L., Compton, R., Riloff, E., Walker, M., Whittaker, S.: And that’s a fact: Distinguishing factual and emotional argumentation in online dialogue. In: NAACL HLT 2015, p. 116 (2015)
Palau, R.M., Moens, M.F.: Argumentation mining: the detection, classification and structure of arguments in text. In: Proceedings of the 12th International Conference on Artificial Intelligence and Law, pp. 98–107. ACM (2009)
Palmer, M.: Semlink: linking propbank, verbnet and framenet. In: Proceedings of the Generative Lexicon Conference, pp. 9–15 (2009)
Peldszus, A., Stede, M.: Rhetorical structure and argumentation structure in monologue text. In: Proceedings of the 3rd Workshop on Argument Mining, ACL 2016, pp. 103–112 (2016)
Peldszus, A., Stede, M.: An annotated corpus of argumentative microtexts. In: Argumentation and Reasoned Action: Proceedings of the 1st European Conference on Argumentation, Lisbon 2015, vol. 2, pp. 801–815. College Publications, London (2016a)
Peldszus, A., Stede, M.: Rhetorical structure and argumentation structure in monologue text. In: Proceedings of the 3rd Workshop on Argumentation Mining, pp. 103–112. Association for Computational Linguistics (2016b)
Prasad, R., et al.: The Penn discourse treebank 2.0. In: LREC. Citeseer (2008)
Stab, C., Gurevych, I.: Recognizing the absence of opposing arguments in persuasive essays. In: ACL 2016, p. 113 (2016)
Stab, C., Gurevych, I.: Recognizing insufficiently supported arguments in argumentative essays. In: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017), pp. 980–990. Association for Computational Linguistics, April 2017
Surdeanu, M., Hicks, T., Valenzuela-Escárcega, M.A.: Two practical rhetorical structure theory parsers. In: Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pp. 1–5 (2015)
Swanson, R., Ecker, B., Walker, M.: Argument mining: extracting arguments from online dialogue. In: Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pp. 217–226. Association for Computational Linguistics, Prague, Czech Republic (2015)
Toulmin, S.E.: The Uses of Argument. Cambridge University Press, Cambridge (1958)
Walton, D.: Argumentation theory: a very short introduction. In: Simari, G., Rahwan, I. (eds.) Argumentation in Artificial Intelligence, pp. 1–22. Springer, Boston (2009). https://doi.org/10.1007/978-0-387-98197-0_1
Walton, D., Reed, C., Macagno, F.: Argumentation Schemes. Cambridge University Press, Cambridge (2008)
Walton, D.N.: Argumentation Schemes for Presumptive Reasoning. L. Erlbaum Associates, Mahwah (1996)
Webber, B., Egg, M., Kordoni, V.: Discourse structure and language technology. Nat. Lang. Eng. 18(04), 437–490 (2012)
Acknowledgments
This article was prepared within the framework of the Basic Research Program at the National Research University Higher School of Economics (HSE) and supported within the framework of a subsidy by the Russian Academic Excellence Project ‘5-100’. It was supported by the RFBR grants 16-29-12982, 16-01-00583.
© 2023 Springer Nature Switzerland AG
Galitsky, B., Ilvovsky, D., Pisarevskaya, D. (2023). Argumentation in Text: Discourse Structure Matters. In: Gelbukh, A. (eds) Computational Linguistics and Intelligent Text Processing. CICLing 2018. Lecture Notes in Computer Science, vol 13396. Springer, Cham. https://doi.org/10.1007/978-3-031-23793-5_7
Print ISBN: 978-3-031-23792-8
Online ISBN: 978-3-031-23793-5