1 Introduction

With the growth of the internet and technology, a large amount of information is present on online social networks as multimodal data (text, images, and videos). Statistics show that the number of people using social media is escalating rapidly, and people can easily share their views with each other via various social media platforms. Some of the content on these platforms contributes to the propagation of hate speech and misleads people, so controlling this kind of information is now very important. Hate speech harms individuals and damages society by fueling hostility, terrorist attacks, child pornography, etc. Figure 1 shows a sample of hate speech and offensive expressions posted on social media or the web. Figure 1(a) shows a clear example of incitement to violence during the large protests against CAA, NRC, and NPR across India in January 2020 [1]. Figure 1(b) shows a tweet released under #putsouthafricansfirst, in which a person openly calls for attacks on foreigners working in South Africa. Figure 1(c) shows a tweet posted in 2014 advocating killing Jewish people for fun, in the context of the synagogue shooting in Pittsburgh [2]. Figure 1(d) shows a post from January 2018 in which a supreme leader issues a genuine threat of war against the US [3].

Fig. 1
figure 1

Examples of hate speech and offensive expressions present over social media

Recent instances of high-profile politicians making speeches in apparent attempts to incite violence have led to large-scale violence, and these instances are yet to be dealt with by law enforcement agencies. Hence, reliably identifying instances of hate is one of the most significant challenges on social media platforms, and research-based analysis of this type of content is necessary. The following section describes the definition and analysis of hate speech from various sources.

1.1 Hate speech: definition perspective and analysis

There is general agreement among researchers on how to define hate speech: it is described as language that attacks an individual or a group based on characteristics like race, color, nationality, sex, or religion [4]. This section provides some state-of-the-art definitions of hate speech (Table 1). Although many authors and social media platforms have given their own definitions of hate speech, researchers follow them to understand its forms and classifications. Observations from the various sources are as follows:

  • Some of the scientific definitions include the community's perspective.

  • Major social networking sites like Facebook, YouTube, and Twitter are the most used platforms where hate speech occurs regularly.

Table 1 Some prominent definitions from state-of-the-art sources

The definition analysis (Table 2) mainly relies on various sources like multiple definitions from scientific papers and powerful social media platforms. The dimensions used for analysis are “violence,” “attack,” “specific targets,” and “status.”

Table 2 Definition Analysis

After a thorough definition analysis, we have also formulated our own definition of hate speech as follows:

“Hate Speech is a toxic speech attack on a person’s individuality that is likely to result in violence when targeted against groups based on specific grounds like religion, race, place of birth, language, residence, caste, community, etc.”

1.2 Hate speech: forms and related words

Figure 2 shows significant forms of hate speech, like Cyberbullying, Toxicity, Flaming, Abusive Language, Profanity, Discrimination, etc., and Table 3 presents the definitions of these forms found in the literature, with their distinction from hate speech.

Fig. 2
figure 2

Forms of Hate speech

Table 3 Comparison between Hate speech and its various forms

Hence, analyzing hate speech on the web is one of the critical areas to study due to the following reasons:

  • To reduce conflicts and disputes created among human beings by toxic language and offensive expressions.

  • The broad availability and popularity of online web-based media, like Facebook, Twitter, Instagram, blogs, microblogs, opinion-sharing sites, and YouTube, boost communication and allow people to freely share their thoughts, emotions, and feelings with strangers.

  • Moreover, clickbait attracts massive attention and encourages visitors to click on links, harming readers' emotions.

  • Hate speeches can incite violence and cause irreparable loss of life and money.

  • Recent incidents have been triggered by online hate speech; a report from the Philippines cites the example of the Christchurch mosque shooting in 2019 [20].

  • To prevent racist and xenophobic violence and discrimination against Asians and people of Asian descent, especially during the pandemic. According to a report published by USA Today in May 2021, more than 6,600 hate and offensive incidents against Asian Americans and Asians had been reported [6].

  • To save our society from being gravely damaged.

From the points mentioned above, it is clear that detecting and restraining hate speech at an early stage is crucial and, indeed, a challenging task. Major online media platforms like Facebook, Twitter, and YouTube are trying to eliminate hate speech and other harmful content at an early stage as part of their ongoing projects, using advanced AI techniques. However, individual vigilance is also vital to keeping hate off these platforms. Social media platforms and individuals can adopt the following suggestions:

  • The most significant source of hate speech on the internet is trolls. A person should block, mute, or report these trolls instead of giving them recognition.

  • A person should analyze the data and verify the facts before forwarding posts.

  • Social media firms should enforce strong policies against abusive behavior.

The following section describes the general framework of hate speech detection adopted by several researchers.

1.3 Motivation

Recently, it has been observed that users are actively engaging on social media in the form of WhatsApp posts, Facebook posts, YouTube shorts, reviews, comments, etc., on various topics. People sharing their views results in a tremendous amount of data on the web, which should be analyzed for further research. Having given the definitions of the various forms of hate and their analysis, and to highlight the motivation behind detecting hate content in every aspect, we briefly discuss recent works in this area in terms of methodologies, modalities, performance, benchmarks, etc. Future trends are also highlighted, motivating researchers to detect hate content.

1.4 General framework of hate speech detection

Figure 3 provides a framework for the process of hate speech identification. The foremost step is to select the source platform where most hate speech/offensive language occurs; most state-of-the-art works adopt major social media firms like Facebook and Twitter. The second step is to collect data in the form of posts or tweets. Gathering a large amount of information from social media platforms is nowadays one of the significant research challenges for researchers and academia. The platforms provide a simple and quick approach to gathering and storing information through built-in APIs [21]. Figure 4 shows the types of data accessible and non-accessible via social media, respectively.

Fig. 3
figure 3

General framework of hate speech detection

Fig. 4
figure 4

a Type of data accessible via some social media networks and b Type of data not accessible via some social media networks [22,23,24]

A large amount of hate speech data is collected from powerful social media platforms such as Twitter, Instagram, and Facebook, which are actively working on combating hate speech. The next phase includes data normalization and feature extraction for training a model, and the last step performs classification.
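The stages just described (platform selection and collection, normalization, feature extraction, classification) can be sketched as a simple pipeline. This is an illustrative toy, not an actual system: every function, the sample posts, and the tiny lexicon are hypothetical placeholders.

```python
# Toy sketch of the general detection pipeline: collect -> normalize ->
# extract features -> classify. All names and data are illustrative.
import re

def collect_posts():
    # Stand-in for an API call to a social media platform.
    return ["I really LIKE this!!", "you people should leave"]

def normalize(text):
    # Basic normalization: lowercase and strip punctuation.
    return re.sub(r"[^a-z\s]", "", text.lower()).strip()

def extract_features(text):
    # Toy feature representation: a bag of tokens.
    return text.split()

def classify(tokens, lexicon={"leave", "hate"}):
    # Toy rule: flag a post if it contains any lexicon term.
    return "hate" if lexicon & set(tokens) else "non-hate"

posts = collect_posts()
labels = [classify(extract_features(normalize(p))) for p in posts]
print(labels)  # ['non-hate', 'hate']
```

A real system would replace each stage with the techniques surveyed below (API-based collection, TF-IDF or embedding features, and a trained classifier).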

Several literature surveys ([14, 25,26,27, 4, 28,29,30,31]) have been published so far. Table 4 compares our survey with related surveys in various aspects like definition analysis, comparison with other hate forms, NLP aspects in terms of modalities, and coverage of models and datasets. This paper also gives a detailed description of hate speech identification in multimodal information by considering major phases like data collection, text mining approaches in automatic hate speech detection, and different machine and deep learning approaches. This paper is a more detailed and systematic survey considering various parameters in terms of datasets, methods, etc., as follows:

  • The study of detecting online hate content has grown only in the last few years, with machine learning being the most prominent approach. This survey covers the work done with deep and hybrid architectures to address the problem of recognizing hate speech, as well as the feature extraction methods used in automatic hate speech detection.

  • This survey identifies the merits and demerits of recent state-of-the-art works, their fundamental aspects, and the techniques used, in tabular form.

  • It also covers publicly available datasets, dataset challenges, and benchmark models.

  • Most previous surveys focus on textual data, and limited literature specifically addresses detecting hate speech in multimodal information. Therefore, this review also considers multimedia data (such as text, images, and videos) to highlight the detection process, along with previous works on multilingual hate speech detection. This survey thus incorporates notable works on both multilingual and multimodal data in this field.

  • The survey also considers the current challenges and possible future directions, which researchers can pursue as further work in this area.

Table 4 Comparison table of related surveys

1.5 Review technique

Although various studies identifying hate speech have been published in earlier years, this survey gathers the most noticeable works in this field. We considered influential journals, conferences, and workshops from various online databases such as IEEE Xplore, Science Direct, Springer, ACM Digital Library, MDPI, CEUR Proceedings, etc. This comprehensive survey reviews more than 120 articles retrieved with keywords like “hate speech detection,” “offensive language detection,” “multilingual,” “images,” “videos,” etc. As Fig. 5 shows, the number of articles published on hate speech detection has increased yearly over the last five years.

Fig. 5
figure 5

Year-wise contribution of research articles on Hate Speech Detection over the last five years

This survey presents a comprehensive analysis of the hate speech detection research arena, as shown in Fig. 6. It breaks hate speech detection down into several meaningful categories, such as types of hate speech, approaches, datasets, and feature extraction methodologies. Special attention has been paid to exploring multimodal and multilingual approaches for better classification capabilities. Specifically, feature extraction methods such as bag-of-words, N-grams, lexicon- and sentiment-based features, TF-IDF, part-of-speech, word references, and rule-based features are analyzed and presented visually in Fig. 7. Next, hate speech detection methods are discussed in detail, categorized into traditional machine learning and deep learning approaches; a separate section elaborates their merits and demerits. A dedicated analysis of publicly available multimodal hate speech datasets dives into the challenges posed by each of them. Finally, evaluation metrics and performance benchmarks are presented to highlight the effectiveness of current state-of-the-art approaches for hate speech detection.

Fig. 6
figure 6

Organization of the survey

Fig. 7
figure 7

Common feature extraction techniques

This survey is organized as follows. The introduction is discussed in Sect. 1, whereas Sect. 2 describes possible feature extraction techniques, in the context of NLP, for automatic hate speech detection. Section 3 covers the most vital work using different methodologies like machine and deep learning, together with a discussion of multilingual work. Section 4 highlights the challenges related to hate speech datasets and their benchmark models, while Sect. 5 depicts the various evaluation metrics and performance measures. Finally, Sects. 6 and 7 present the conclusion and future directions, respectively.

2 Feature extraction techniques in automatic hate speech detection

A feature is a measurable characteristic of an entity or a phenomenon. [32] focuses on natural language processing (NLP) to explore the automated understanding of human emotions from texts. This section describes the various text features used to extract hate content (Fig. 7). Word references and lexicons are the most straightforward and basic approaches to feature extraction in text analysis: a dictionary method generates a set of words to be looked up in the text, and term frequencies are used directly as features. Identifying the appropriate features for classification is tedious when using machine learning, yet features play an essential role in machine learning models: such models cannot work on raw data, so feature extraction techniques are needed to convert text into feature vectors. The fundamental first step in both traditional and deep learning models is tokenization. Many basic features, like bag-of-words, Term Frequency–Inverse Document Frequency, word references, etc., are used.

2.1 Bag-of-words (BOW)

BOW is an approach, similar to word references, extensively used for document classification ([14, 33,34,35]). After gathering all the words, the frequency of each word is used as a feature for training a classifier. The drawback of this technique is that the ordering of words is disregarded, losing both syntactic and semantic information, and both are crucial in detecting hate content. [36] used BOW to represent Arabic hate features during text pre-processing before applying various machine learning classifiers. [37] derived a method for detecting Arabic religious hate speech using different features with machine and deep learning models. Consequently, BOW can lead to misclassification when the same terms are used in multiple contexts; N-grams were introduced to overcome this issue.

2.2 N-grams

The N-grams approach is the most utilized procedure in identifying hate speech and offensive language ([12, 18, 33, 38,39,40,41]). The most widely recognized N-grams approach combines the words in a sequence into records of size N. The objective is to enumerate all expressions of size N and count their occurrences. This increases the performance of all classifiers since it incorporates the context of each word [42]. Rather than words, it is also possible to apply the N-grams approach to characters. [43] proved that character N-gram features are more predictive in detecting hate speech than token N-gram features, whereas the opposite holds for identifying offensive language. N-grams also have limitations, such as related words lying far apart within a sentence [33]; a remedy is to increase the value of N, but this lowers the processing speed [15]. [39] proved that greater N values perform better than lower ones (unigrams and trigrams). The authors of [4, 38] observed that character N-gram features perform better when combined with extra-linguistic features. The authors of [44] generated one-hot N-gram and N-gram embedding features to train their model and observed better performance with N-gram embeddings.

2.3 Lexicon-based and sentiment based

Lexical features use unigrams and bigrams of the target word, whereas syntactic features include POS tags and various components from a parse tree. The parser proposed by the Stanford NLP Group [45] was used to capture the linguistic dependencies within a sentence [15]. Lexicon-based methods are crucial in identifying the sentiment of speech; for example, nigga is an offensive word and must be prohibited in ordinary language [46]. Hateful speech on a social platform cannot carry a positive polarity, because abusive language conveys a negative inclination from the speaker to listeners and readers. The authors of ([12, 39, 47,48,49,50, 51]) consider sentiment as a feature for identifying hate speech. Some authors [39] used sentiment features in combination with others, which proved to enhance results. [52] presents a metaheuristic approach to sentiment analysis and showed that optimization methods can be used as an alternative to machine learning models with promising results.
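A minimal lexicon-based sentiment feature can be sketched as a polarity count over a word list. The tiny lexicon here is hand-made and purely illustrative; real work would use resources such as SentiWordNet or VADER:

```python
# Lexicon-based polarity feature: (#positive tokens) - (#negative tokens).
# POSITIVE/NEGATIVE are toy lexicons, not real sentiment resources.
POSITIVE = {"good", "love", "great"}
NEGATIVE = {"hate", "awful", "kill"}

def polarity(text):
    tokens = text.lower().split()
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

print(polarity("I love this great idea"))  # 2
print(polarity("I hate you"))              # -1
```

The resulting integer can be appended to a feature vector alongside N-gram or TF-IDF features, which is the combination [39] reports as beneficial.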

2.4 Topic modeling

This method, also known as topic classification, focuses on extracting the topics that occur in a corpus. Topic modeling has been used for detecting hateful comments on major social media platforms like YouTube [53]. [54] used the Latent Dirichlet Allocation model [55] to discover abstract topics and use them in classifying multimodal data. [56] derived text clusters from LDA for multilingual hate speech detection and found that topic modeling does not give any major insight for classification.
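LDA as used in the works above can be sketched in a few lines with scikit-learn: each document is mapped to a mixture over abstract topics, and that mixture can serve as a feature vector. The four-document corpus is a toy example:

```python
# Latent Dirichlet Allocation: documents -> per-document topic mixtures.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "attack the group now", "attack attack them now",
    "lovely sunny weather today", "sunny weather lovely day",
]
X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)  # shape (4, 2); each row sums to 1
print(doc_topics.shape)  # (4, 2)
```

The rows of `doc_topics` are the topic-proportion features that [54] feeds into its downstream classifier.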

2.5 TF-IDF

TF-IDF is a scoring measure broadly used in information retrieval and is designed to reflect how important a term is in a given document. It is the most common feature extraction technique used by traditional classification methods for hate speech identification ([35, 57, 58]). TF-IDF differs from the bag-of-words and N-gram techniques in that the raw frequency of a term is offset by its frequency across the corpus, which accounts for the fact that some words (for example, stop words) appear more often than expected. [59] used N-grams and TF-IDF values to perform a comparative analysis of machine learning models for detecting hate speech and offensive language and claimed that L2 normalization of TF-IDF outperforms the baseline results.
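The L2-normalized TF-IDF setting reported in [59] can be sketched directly with scikit-learn; after normalization every document vector has unit length, so classifiers compare documents by direction rather than raw length. The three documents are illustrative:

```python
# TF-IDF with L2 normalization: each document row becomes a unit vector.
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np

docs = ["they attack us", "we report the attack", "the weather is nice"]
vectorizer = TfidfVectorizer(norm="l2")
X = vectorizer.fit_transform(docs)

norms = np.linalg.norm(X.toarray(), axis=1)
print(norms)  # each ~1.0
```

Unit-length rows make dot products equal to cosine similarity, which is one reason L2 normalization tends to help linear classifiers on text.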

2.6 Part-of-speech

POS tagging is a well-known task in NLP. The approach classifies words into their parts of speech; moreover, it enriches the context and identifies a word's role within a sentence [60]. Some authors [40] used this approach to classify racist text. POS tagging with TF-IDF gives better results in Indonesian hate speech detection [61].
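To make the idea concrete, here is a deliberately tiny, rule-of-thumb tagger sketch; real systems use trained taggers (e.g. NLTK's or spaCy's), and the lexicon and suffix rules below are illustrative only:

```python
# Toy POS tagger: a small lexicon plus suffix heuristics, just to show
# how tags attach role information to raw tokens. Not a real tagger.
def toy_pos_tag(tokens, lexicon={"they": "PRON", "the": "DET"}):
    tags = []
    for tok in tokens:
        if tok in lexicon:
            tags.append(lexicon[tok])
        elif tok.endswith("ly"):
            tags.append("ADV")
        elif tok.endswith("ed") or tok.endswith("ing"):
            tags.append("VERB")
        else:
            tags.append("NOUN")
    return list(zip(tokens, tags))

print(toy_pos_tag(["they", "openly", "attacked", "the", "group"]))
# [('they', 'PRON'), ('openly', 'ADV'), ('attacked', 'VERB'),
#  ('the', 'DET'), ('group', 'NOUN')]
```

The tag sequence (e.g. PRON-ADV-VERB) is what gets combined with TF-IDF in setups like [61].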

2.7 Word embedding

The most widely recognized technique in text analysis of hate content is the utilization of word references: this methodology comprises all words (the word reference) that are looked up and counted in the message, with frequencies used directly as features and for calculating scores. In NLP, word embedding is used for representing words in text analysis. [62] uses word2vec embeddings to extract hate content features that group semantically related words. [63] applies attention-based neural networks with word embedding features for classification. Hate speech detection in the Spanish language [64] uses word embedding methods like Word2Vec, GloVe, and FastText for feature extraction.
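The grouping of semantically related words that embeddings provide comes down to vector proximity. The following sketch uses hand-made 3-dimensional toy vectors; a real setup would load pretrained Word2Vec, GloVe, or FastText vectors instead:

```python
# Toy embeddings: semantically related words get nearby vectors,
# so cosine similarity separates related from unrelated pairs.
import numpy as np

emb = {
    "hate":   np.array([0.9, 0.1, 0.0]),
    "loathe": np.array([0.8, 0.2, 0.1]),
    "sunny":  np.array([0.0, 0.1, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["hate"], emb["loathe"]))  # high (related words)
print(cosine(emb["hate"], emb["sunny"]))   # low (unrelated words)
```

Classifiers built on such vectors can generalize to hateful words never seen during training, as long as they embed near known ones.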

Another technique used in text analysis of hate content is the distance metric, which can supplement word reference-based methodologies. A few investigations have called attention to negative words being obscured with intentional misspellings [65]. Instances of such terms are @ss, sh1t [18], nagger, or homophones such as joo [65].
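A distance-metric lookup of this kind can be sketched with the standard-library `difflib` similarity ratio: an obfuscated token matches a lexicon entry when the two strings are close enough. The threshold value and the two-word lexicon are illustrative choices, not recommendations:

```python
# Fuzzy lexicon matching: catches intentionally obscured spellings
# ("sh1t", "@ss") that an exact dictionary lookup would miss.
from difflib import SequenceMatcher

def matches_lexicon(token, lexicon, threshold=0.6):
    # True if token is similar enough to any lexicon word.
    return any(SequenceMatcher(None, token, w).ratio() >= threshold
               for w in lexicon)

lexicon = {"shit", "ass"}
print(matches_lexicon("sh1t", lexicon))     # True  (ratio 0.75 vs "shit")
print(matches_lexicon("@ss", lexicon))      # True  (ratio ~0.67 vs "ass")
print(matches_lexicon("weather", lexicon))  # False
```

The trade-off is the usual one for fuzzy matching: a lower threshold catches more obfuscations but also more false positives.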

2.8 Rule-based approach

Text analysis uses rule-based feature selection to find regularities in data, expressed, for example, as IF-THEN clauses. [66] shows that rule-based methods involve no learning but depend on a word reference of subjectivity clues. This approach is used to extract subjective sentences in order to generate hate content classifiers for an unlabeled corpus [48]. [67] combines dictionary-based classifiers with rule-based classifiers to generate semantic features for hate speech classification.
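An IF-THEN rule set of the kind described above can be sketched as a list of (condition, label) pairs with first-match semantics; the rules and labels here are illustrative toys, not rules from the cited works:

```python
# Rule-based classification: ordered IF-THEN rules, no learning involved.
RULES = [
    (lambda t: "kill" in t and "you" in t, "hate"),        # IF ... THEN hate
    (lambda t: t.count("!") >= 3,          "aggressive"),  # IF ... THEN aggressive
]

def rule_classify(text):
    lowered = text.lower()
    for condition, label in RULES:
        if condition(lowered):
            return label
    return "clean"  # default when no rule fires

print(rule_classify("we will kill you"))  # hate
print(rule_classify("go away!!!"))        # aggressive
print(rule_classify("nice weather"))      # clean
```

Because no learning is involved, the quality of such a classifier depends entirely on the coverage of the hand-written rules, which is the limitation [66] points out.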

3 Automatic hate speech identification approaches

This section describes the research on hate speech identification using various models by establishing a thorough qualitative and quantitative examination of what constitutes multilingual hate speech. It compares different machine and deep learning models for detecting hate speech in multiple languages, along with the labels/classes and datasets used. The authors also compare deep learning-based models with shallow learning models [68]. Figure 8 shows the various traditional and deep learning models used to identify hate speech. In the past few years, most work has been done on the general English language using various machine learning models, and the results of deep learning and hybrid learning models have proved superior in terms of precision and recall. The following two Sects. 3.1 and 3.2 describe the sub-domain AI approaches to multilingual data.

Fig. 8
figure 8

Taxonomy of Hate Speech Detection considering various models

3.1 Machine learning approaches to hate speech detection

Several machine learning models have been created to perform tasks like classification, prediction, and clustering, and they are able to take advantage of data availability. Classification tasks rely on labeled data, which is used to train the model to achieve reliable accuracy. The performance of machine learning algorithms depends directly on how accurately the features are identified or extracted, and classification algorithms perform the detection task after normalizing the text. The efficacy of a model trained on a combination of several datasets is always better than training on a specific dataset [69]. Machine learning algorithms are categorized as supervised, semi-supervised, and unsupervised methods, and researchers have used all of these to detect online hate data in various languages. Among the different models, researchers primarily use SVM to classify social media data as hate or non-hate, with Random Forest holding the second position [70]. Table 5 clearly shows that most research is conducted on the general English language using supervised machine learning methods. Some authors investigated the impact of pre-processing techniques [36] to improve text quality and mainly to retain the features without losing information [71] for better performance. One piece of research on multi-class classification [18] carried out a machine learning-based approach for classifying online user comments into four classes (Clean, Hate, Derogatory, and Profanity) on the Amazon dataset. The supervised learning approach is domain-dependent, since it relies on manually labeling a massive volume of data.

Table 5 Recent state-of-the-artwork for detecting hate speech in various languages using machine learning models

The advantage of manual labeling is its efficiency for domain-dependent tasks, while its limitation lies in execution time. [72] used a Bayesian logistic regression model to classify Twitter data into hateful and antagonistic labels. The authors of [73] focus on South Asian languages to evaluate and compare the effectiveness of various supervised techniques for hate speech detection. Semi-supervised learning algorithms are trained on both labeled and unlabeled information; labeling data related to the unlabeled portion can effectively improve efficiency. [74] analyzed that unsupervised learning has a limited capacity to deal with small-scale events, whereas supervised learning can capture them adequately; however, manually labeling the dataset lowers the model's scalability. [75] utilized several machine learning classifiers with various vector representations like TF-IDF, Count Vectorizer, and Word2vec as baselines on their own Urdu dataset. KNN is a widely used choice for classification tasks in a supervised learning approach [76, 77]. [78] built an ensemble system utilizing various traditional machine learning models (LR, SVM, RF, MNB, and XGB) for detecting sentiments.

When working with a particular language, the task can be considered domain-dependent. The first racially oriented research was carried out by [34], who built a supervised model to distinguish bigoted tweets. [72] trained a supervised machine learning text classifier and used human-annotated data from Twitter to train and test the classifiers. [50] proposed an approach for distinguishing hate speech in the Italian language on Facebook.

The authors of [79] explored the capacity to recognize hate in the Indonesian language. The best outcomes in [48] were obtained when semantic hate and theme-based components were combined. SVM outperformed CNN and the ensemble approach in all subtasks of HaSpeeDe [80], using hate-rich embeddings [81]. With so many users worldwide, multilingual hate speech is spreading across continents in different forms.
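The dominant supervised pattern in this section (TF-IDF or N-gram features feeding an SVM) can be sketched end-to-end with scikit-learn. The four training texts and their labels are toy data invented for illustration; this is not any specific surveyed system:

```python
# Typical supervised setup from Table 5: TF-IDF word 1-2 grams + linear SVM.
# Training texts and labels are toy examples, not a real dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["they should all leave", "we will attack them",
         "what a lovely day", "great match yesterday"]
labels = ["hate", "hate", "non-hate", "non-hate"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)
print(model.predict(["we will attack", "a lovely day"]))
```

On real data the same pipeline would be trained on thousands of annotated posts and evaluated with the metrics discussed in Sect. 5.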

3.2 Deep learning approach to hate speech detection

Deep learning architectures (Table 6) represent a promising future for text analysis tasks. They rely entirely on artificial neural networks to investigate patterns in text at greater depth. In the past few years, deep learning methods have outperformed machine learning methods thanks to the availability of large datasets. In previous works, RNN and CNN are the most widely utilized deep learning models for NLP tasks. Implementing these two deep neural networks is somewhat difficult because of their intricate architectures: RNNs come in two variants, LSTM and GRU, which support sequential processing, whereas CNNs have a hierarchical architecture. The efficacy of deep learning methods depends directly on the right choice of algorithm, the number of hidden layers, the feature representation techniques, and the ability to learn high-level features from data. Because of these performance factors, deep learning approaches are not better than conventional methods in every case.

Table 6 Generic deep learning architectures

For hate speech identification, [105] utilized an RNN model with word frequency, and their outcomes beat the then state-of-the-art deep learning methods. Deep learning techniques like automatic prediction, sentiment analysis, and classification are now also being used to process hate images. [106] is a collection of memes from various social media platforms like Reddit, Facebook, Twitter, and Instagram; the dataset was prepared from the 2016 U.S. Presidential Election event as a collection of manually annotated image URLs and text embedded in the images, resulting in 743 memes. Regarding the classification of hateful memes, [107,108,109] present various deep learning models for meme datasets. Among the research done so far, [109] presents a visio-linguistic model (VILIO) for hateful meme detection and yields benchmark results. Deep learning strategies have recently been utilized in text classification and sentiment analysis with maximum accuracy [110]. The authors of [78] used deep learning and transformer-based models (DNN, DNN with embedding, CNN, LSTM, Bi-LSTM, m-BERT, distilBERT, XLM-RoBERTa, MuRIL) to reduce the misclassification rate and improve the prediction rate for understanding code-mixed Dravidian languages. Table 7 shows the recent state-of-the-art for identifying hate speech using deep and hybrid learning methods across multiple languages like English, Italian, Arabic, and Spanish. As seen in Table 7, deep and hybrid learning models are evolving for classification tasks, and most works have been done on Twitter datasets in the general English language using supervised approaches ([41, 105, 111,112,113]). The authors of [114] show that LSTM is the most effective method for hate speech identification.
[115] uses rule-based clustering methods that outperform other baseline and state-of-the-art methods such as Naive Bayes, BERT, Logistic Regression, RNN, LSTM, CNN-GloVe, and GRU-3-CNN in terms of AUC, accuracy, precision, recall, and F1-score. [54] performs semi-supervised multi-task learning utilizing a fuzzy ensemble approach, generating sequential and constructive rules to add to the rule set and applying Latent Dirichlet Allocation [55] for topic extraction, to identify four classes of hate speech from a Twitter dataset. The authors also showed that the fuzzy-based approach [54], metaheuristic approaches ([116, 117]), and an interpretable approach [118] outperformed other techniques with a high detection rate. [119] applies a supervised hybrid learning approach for classifying hate speech into two labels, specifically in the Arabic dialect. Moreover, Bayesian attention networks, which follow the transformer architecture, have been implemented for multilingual (English, Croatian, and Slovene) contexts [120].
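To make the "sequential architecture" of RNN/LSTM concrete, here is a minimal NumPy sketch of a single LSTM cell's forward pass over a toy token sequence. The dimensions and random weights are illustrative only and are not tied to any model in the surveyed works:

```python
# One LSTM cell stepped over a 7-token sequence: the hidden state h
# carries context forward, which is what makes the model sequential.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    z = W @ x + U @ h + b                  # all four gates at once, shape (4*hid,)
    i, f, o, g = np.split(z, 4)            # input, forget, output, candidate
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # updated cell state
    h_new = sigmoid(o) * np.tanh(c_new)               # new hidden state
    return h_new, c_new

rng = np.random.default_rng(0)
d_in, d_hid = 5, 3                         # toy embedding and hidden sizes
W = rng.normal(size=(4 * d_hid, d_in))
U = rng.normal(size=(4 * d_hid, d_hid))
b = np.zeros(4 * d_hid)

h = c = np.zeros(d_hid)
for x in rng.normal(size=(7, d_in)):       # 7 token embeddings, fed in order
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (3,) -- final hidden state, usable as a sequence feature
```

In a real detector the final (or pooled) hidden state feeds a dense softmax layer that outputs the hate/non-hate decision.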

Table 7 Recent state-of-the-art for detecting hate speech in various languages via deep and hybrid learning models

3.3 Merits and demerits of various models

Hate speech identification is a highly prevalent research field nowadays. Researchers worldwide are experimenting with various models for specific detection tasks, each with its advantages and disadvantages. [121] implemented CNN and BERT models and demonstrated efficient accuracy on intra-domain and cross-domain datasets. ([122, 123]) used FCM, SCM, and TKM for concatenating/combining features extracted from CNN and R-CNN, respectively, on textual and visual Twitter data, achieving good accuracy compared to other baseline models. [124] used ELMo, BERT, and CNN to improve classification results, but with higher time complexity. [125] also has the limitation of higher computational complexity, yet the authors created their own detection system and implemented a deep belief network on labeled and unlabeled data. [116] presents two metaheuristic optimization algorithms (Ant Lion Optimization and Moth Flame Optimization), applied for the first time to the hate speech detection problem, with an efficient accuracy above 90%. [117] implemented an enhanced seagull optimization algorithm on the CrowdFlower and StormFront datasets, claiming outperforming scores above 98%. The pros and cons of the latest state-of-the-art works on hate speech detection are shown in Table 8.

Table 8 Merits and demerits depicted from the latest state-of-the-art works in hate speech detection

4 Hate speech datasets

Social media platforms are prevalent nowadays, and their users are increasing tremendously; as a result, hate speech content in various forms is at its peak. With a massive amount of data present on the web, collecting a good and relevant amount of it is challenging for researchers. Social media platforms provide simple and easy approaches to gathering data via their APIs [21]. However, data collection is not confined to APIs only; Fig. 9 shows various ways of accessing data from social media.

Fig. 9
figure 9

Prominent ways to access data from social media

4.1 Dataset description

Hate speech identification has become a crucial task across many languages and domains. Videos play a fundamental role in disseminating content, as they can reach a vast audience, including young children; estimates suggest that 1 billion hours of video are watched every day on YouTube alone. Detecting hate speech is important to give children a protected climate and a healthy environment for users in general. Until now, text has been the most popular format studied; consequently, most current works focus on recognizing hate speech in text (social platform posts, news comments, tweets, and so on). While hate speech detection methods primarily use textual inputs, a few research contributions exist toward multimodal hate speech detection. Several authors have generated multi-class/multi-label datasets in various languages for curbing hate content on social media. Hate speech detection (HaSpeeDe) is the prevalent shared task organized within Evalita 2018 [80] and consists of manually annotated Italian messages taken from Twitter and Facebook. This shared task was further categorized into three sub-tasks: HaSpeeDe-FB, HaSpeeDe-TW, and Cross-HaSpeeDe.

4.2 Datasets challenges

  • The available and widely used datasets ([38, 140]) suffer from subjectiveness, which introduces bias into reported performances. Hate speech datasets are affected mainly by social, behavioral, racial, temporal, and content-production biases [141]. Data imbalance due to bias may lead to misclassification [142].

  • One of the significant issues is the lack of labeled non-English datasets. Only a few manually annotated labeled datasets have been released for detecting offensive language and hate speech [78]. Moreover, multilingual hate speech datasets may also mix in text from other languages; for example, a dataset created for Urdu hate speech and offensive language can contain Farsi and Arabic tweets [75].

  • Problems also arise when the web address of a dataset changes [143]. Authors who create a new dataset often do not publish it [73].

  • Twitter (with its lenient data-usage policies) is the most prevalent platform. However, Twitter resources are a special case because Twitter posts are limited to short text. Content from other media platforms is longer and can be part of a more extensive conversation around hate speech.

  • Datasets differ in their size, scope, and the features annotated, which leads to inconsistency in the proportion of hate and non-hate texts across datasets. For example, on a platform like Twitter, hate speech occurs at a very low rate compared to non-hate content. Researchers can therefore also gather data from social media platforms with no character-length limit.
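The class imbalance noted above is commonly countered by weighting classes inversely to their frequency during training. A minimal, library-free sketch of computing such weights (the label names and counts are illustrative, not taken from any of the surveyed datasets):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class by total / (num_classes * count), so the rare
    class (e.g. 'hate') contributes more to the training loss."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}

# Toy corpus: hate speech is a small minority, as on real platforms.
labels = ["non-hate"] * 90 + ["hate"] * 10
weights = inverse_frequency_weights(labels)
print(weights)  # 'hate' gets weight 5.0, 'non-hate' roughly 0.56
```

These per-class weights can then be passed to most classifiers (e.g. as a class-weight dictionary) so that misclassifying a hate example costs more than misclassifying a non-hate one.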

Given the above challenges, making data available in a better format for future research is essential. Table 9 presents various benchmark models on multiple datasets. Benchmarks for commonly used datasets ([12, 38], Gomez et al., 2020) are also shown in the table.

Table 9 Benchmark models on datasets

The overall description of the datasets in terms of modalities (T-Text, I-Images, V-Videos), classes/labels, languages, etc., is tabulated in Table 10.

Table 10 Dataset description in terms of size, labels, languages, and modalities

5 Evaluation and performance measures

Datasets play a significant role in testing the performance of hate speech detection: the better normalized the dataset, the better an algorithm will perform. In this section, the evaluation metrics for the machine and deep learning techniques used are F1-score, recall, and precision, and the performance-measurement metrics are accuracy and AUC (Area Under the Curve).

5.1 Evaluation metrics

Most state-of-the-art works have utilized accuracy, F1-score, precision, recall, and ROC to assess performance. [132] applies several loss functions, such as MSE, cross-entropy, and likelihood loss, to predict hate speech on the most used platform, Twitter. A loss function measures the difference between the predicted value, denoted by ŷ, and the labeled value, denoted by y. [143] uses four key performance indicators (KPIs): the percentage of true positives, precision (P), recall (R), and the F1-score defined in Eq. 1:

$${F}_{1}\text{-}Score=2\times \frac{\mathrm{P}\times \mathrm{R}}{\mathrm{P}+\mathrm{R}}$$
(1)
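As a quick check of Eq. 1, the F1-score is the harmonic mean of precision and recall and can be computed directly (the values below are illustrative):

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 = 2 * P * R / (P + R), the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0  # convention: undefined F1 is reported as 0
    return 2 * precision * recall / (precision + recall)

print(f1_score(0.8, 0.6))  # approximately 0.686
```

Because the harmonic mean is dominated by the smaller of the two values, a model cannot score well on F1 by inflating only precision or only recall.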

[132] uses several loss functions, namely Mean Square Error (MSE) [163], given in Eq. 2, Cross-Entropy Loss (CEL) [164] in Eq. 3, and Likelihood Loss (L) [165] in Eq. 4, to approximate the accuracy of the proposed model in identifying hate speech on the Twitter dataset.

$$MSE = \frac{1}{N}\mathop \sum \limits_{i = 1}^{N} \left( y_{i} - \hat{y}_{i} \right)^{2}$$
(2)

where,

N denotes the number of samples, with ŷ the predicted value and y the labeled value for each sample i.

$$CEL = - \mathop \sum \limits_{c = 1}^{M} y_{{\text{o,c}}} \log \left( {P_{{\text{o,c}}} } \right)$$
(3)

where,

M represents the number of classes,

o denotes a particular observation,

P is the predicted probability that observation o belongs to class c,

log is the logarithmic function, and

y is the binary indicator (0 or 1) of whether class c is the correct label for observation o.

$$L = - \frac{1}{n}\mathop \sum \limits_{i = 1}^{n} \log \left( {\hat{y}_{(i)} } \right)$$
(4)

where,

\(n\) gives the number of classes, and

\(\hat{y}\) denotes the predicted output.
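Eqs. 2-4 can be written out directly in a few lines of Python, using the standard sign convention under which all three losses are non-negative (the numeric values below are toy examples, not results from any surveyed model):

```python
import math

def mse(y_true, y_pred):
    """Eq. 2: mean squared error between labels and predictions."""
    n = len(y_true)
    return sum((y - yh) ** 2 for y, yh in zip(y_true, y_pred)) / n

def cross_entropy(y_true, p_pred):
    """Eq. 3: -sum over classes of y * log(P) for one observation,
    where y is the one-hot label and P the predicted probabilities."""
    return -sum(y * math.log(p) for y, p in zip(y_true, p_pred) if y > 0)

def likelihood_loss(probs):
    """Eq. 4: negative mean log of the predicted probabilities."""
    return -sum(math.log(p) for p in probs) / len(probs)

print(round(mse([1, 0, 1], [0.9, 0.2, 0.8]), 4))    # 0.03
print(round(cross_entropy([1, 0], [0.7, 0.3]), 4))  # 0.3567
print(round(likelihood_loss([0.9, 0.8]), 4))        # 0.1643
```

All three losses shrink toward zero as the predictions approach the labels, which is what makes them usable as training objectives for the models surveyed here.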

5.2 Performance of popular hate speech detection methods

Most state-of-the-art works on hate speech detection used precision, recall, and F1-score for evaluation; others used AUC and accuracy as performance measures, partly because of imbalanced datasets. Table 11 gives evaluation and performance measures from some state-of-the-art works based on accuracy, precision, recall, F1-score, and AUC. As seen in Table 11, precision, recall, and F1-score are the metrics used by most authors, as they provide better insight into predictions than accuracy and AUC on imbalanced data. Deep learning models have outperformed machine learning models with high performance metrics, as presented in Table 11.
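AUC, mentioned above, can be computed without plotting a ROC curve: it equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. A small self-contained sketch (the scores and labels are made up for illustration):

```python
def auc(labels, scores):
    """AUC as the fraction of positive/negative pairs ranked correctly,
    with ties counted as half. labels are 0/1, scores are model outputs."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1]
scores = [0.9, 0.4, 0.35, 0.8, 0.7]
print(auc(labels, scores))  # 4 of 6 pairs ranked correctly
```

Because AUC depends only on the ranking of scores and not on any decision threshold, it remains informative on the heavily imbalanced datasets that are typical in hate speech detection.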

Table 11 Performance comparison

6 Discussion

Hate speech is an emerging issue on social media sites nowadays, and identifying hate content is one of the major concerns and challenges for researchers. This article presents a systematic organization of the state-of-the-art work done so far. To the best of our knowledge, feature extraction methods based on distance metrics and multimodal information have rarely been applied specifically to hate content detection. Both directional models, such as RNN and LSTM, and non-directional models, such as Transformers and BERT, are utilized in identifying hate content. Although machine learning has grown over the last decade, NLP has shown the steepest growth by incorporating evolutionary models such as BERT and Transformers. Variants of BERT, such as ALBERT, RoBERTa, and DistilBERT, are used increasingly in solving real-life problems because of their self-attention mechanism. Researchers also use the LSTM model, as it yields higher results than BERT on small datasets. The pros and cons of the various models are described in detail in Sect. 3.3. Over the last two years, metaheuristic optimization algorithms such as Ant Lion Optimization, Moth Flame Optimization, and Seagull Optimization have also been applied in this area with promising results.

7 Findings and conclusion

Hate speech attempts to marginalize classes and groups of people who are already in the minority due to their race, language, or religion. This article reviewed the most outstanding work on automatic hate speech identification. First, we introduced some state-of-the-art hate speech definitions and analyzed them along specific dimensions, including a comparison between the definition of hate speech and the definitions of various related forms of hate. This survey also highlights NLP aspects of the area. We then presented a taxonomy of automatic hate speech detection, including sub-domains of AI approaches. Metaheuristic algorithms, which are very new in the context of hate speech detection, are also covered in this manuscript. The paper additionally surveys work on multilingual and multimodal hate speech detection, along with descriptions of various datasets.

8 Future trends

Our studies recommend some future trends from the following angles:

  • We have explored some standard hate speech datasets along with their key features, classifications, objectives, and the types of data format available. Most datasets are available in textual form; very few, such as MultiOFF and MMHS150K, cover hateful memes, and no video dataset is publicly available to the best of our knowledge. Creating new image and video datasets can therefore be seen as a future task. Moreover, many researchers consider dataset availability a significant challenge: few publicly available datasets exist, authors often do not reuse them, and when they create new datasets they frequently do not publish them, making it difficult to compare results and conclusions.

  • Choosing informative, independent, and discriminating features is crucial in classification problems. This paper covers commonly used text-analysis features for hate classification tasks. Automatic feature engineering for generating hate-specific features can hence be a future direction.

  • For the last few years, authors have focused on multilingual hate speech identification by creating their own datasets, but very few labeled datasets exist in non-English languages. Various benchmark models could also be applied to non-English labeled datasets.

  • We have also covered important work on hate speech identification in various languages. Models that understand only English are not efficient at processing input from different Indian languages [78], so building a system for code-mixed languages can be considered a future direction.

  • Nowadays, emojis are also used to express users' feelings and attitudes [36], and they are vital elements in delivering hateful or offensive content over social media. Hence, pre-processing emoji text can be treated as a distinct sub-task to improve aggression detection.

  • There is significantly less work on neutral-tagged content [75]. Devising a better method for handling neutral-tagged content in multi-label datasets can be considered future work.

  • In our systematic survey, we found that most works describe the techniques, extracted features, and models utilized, but it is uncommon to find works with available public repositories. More sharing of code, algorithms, feature-extraction procedures, and platforms would help the area advance rapidly.

  • In this article, some metaheuristic optimization approaches for hate speech detection are also discussed. Apart from the mentioned metaheuristic approaches, such as ALO and MFO, parameter-optimization approaches could also be implemented in future work on hate content detection.