
1 Introduction

For more than a decade, government agencies have been making considerable efforts and investments to exploit the capabilities offered by information and communication technologies (ICT), and especially the Internet, to increase citizens’ engagement in their decision and policy making processes. This has led to a large increase in e-participation research [1–3] and practice [4–11]. This first generation of e-participation has been characterised by the development of many ‘official’ e-participation spaces operated by various government agencies, which offered citizens extensive information on government activities, decisions, plans and policies, e-voting and e-survey tools, and also e-consultation spaces, such as e-forums, where citizens could enter opinions on various topics under discussion, or on other citizens’ opinions. The need to increase the quality of these e-consultations led to the development of more structured types of e-forums [12–14], which impose the semantic annotation of users’ postings (e.g. as issues, alternatives, pro-arguments, or contra-arguments), and allow only some predefined relations among them (e.g. an alternative can be related only to an issue, etc.). A first evaluation of these more structured types of e-forums has shown that they facilitate and drive a more disciplined, focused and argumentative discussion; however, they are more difficult and demanding to use, so they are appropriate for more knowledgeable and educated citizens’ groups, and might exclude less educated and sophisticated ones.

The outcomes of this first generation of e-participation were much lower than the initial expectations (e.g. [15, 16]). The use of these official e-participation websites by citizens has in general been limited. Governments expected citizens to make the first step, moving from their own online environments to these official e-participation websites in order to participate in public debates on various proposed public policies or legislation, and adapting to the structure, language and rules of these websites, but this happened only to a limited extent. Also, most of the topics discussed there were defined by government, very often did not directly touch citizens’ daily problems and priorities, and were more appropriate for experts. Furthermore, many of the ICT tools adopted were not sufficiently user-friendly and appropriate for wide citizens’ participation. Gradually it was realized that the design of e-participation spaces ‘for all’ was not an easy task, due to the heterogeneity of real or potential online users with respect to educational level, ICT skills and culture. Another problem was that the methodologies used for e-participation were not scalable, so they could be used for pilot trials but were not appropriate for large scale e-participation.

The emergence of Web 2.0 social media offers big opportunities for overcoming the above problems and proceeding to a second generation of broader, deeper and more advanced e-participation. It allows government agencies to transform their approach to e-participation: instead of hosting it exclusively on their own official e-participation websites, they can also exploit popular Web 2.0 social media, which attract numerous visitors; moreover, many of them attract groups of visitors quite different from the ones usually visiting the official e-participation websites (e.g. with respect to educational level, ICT skills and culture). For this reason Web 2.0 social media have recently started being exploited by government agencies, both for broadening and enhancing their interaction with citizens and for internal coordination and knowledge exchange [17–19]. So while previously governments moved towards the creation of more structured e-consultation spaces, as mentioned above, they currently tend to move in the opposite direction and reduce the structure they impose on their interaction with citizens: instead of inviting citizens to interact with government in the official e-participation spaces in accordance with their rules and structures, it is now the government that goes to the electronic spaces where citizens prefer to have discussions, create content and collaborate with others. However, government agencies have to successfully address many challenges in order to use Web 2.0 social media efficiently for the above purposes. While previously they had to manage a single e-participation space (e.g. make postings to it, process the postings of citizens, reply to them, etc.), in this new approach they have to manage many Web 2.0 social media concurrently (e.g. publish content to them, retrieve from them data on users’ interactions, such as views, comments, ratings, votes, etc., integrate and process these data and draw conclusions, and based on these conclusions publish new content in each of them, etc.); this requires much more effort and therefore more human and financial resources.

This chapter aims to contribute to addressing these challenges. It presents a methodology for the efficient exploitation of Web 2.0 social media by government agencies in order to broaden and enhance e-participation, overcoming the above challenges. It is based on a central platform which enables posting content and deploying micro web applications (termed ‘Policy Gadgets’ or Padgets) to multiple popular Web 2.0 social media simultaneously, and also collecting users’ interactions with them (e.g. views, comments, ratings, votes, etc.) in an efficient manner using their application programming interfaces (APIs). These interaction data undergo several levels of advanced processing, such as basic processing resulting in the calculation of useful analytics, opinion mining and simulation modelling, in order to provide effective decision and policy making support. The proposed methodology leads to a transformation of government agencies’ existing single-channel approach to e-participation towards ‘hybrid’ multi-channel approaches, which combine the use of interconnected ‘official’ e-consultation spaces (both unstructured and structured) and Web 2.0 social media. It has been developed in the PADGETS (‘Policy Gadgets Mashing Underlying Group Knowledge in Web 2.0 Media’, www.padgets.eu) research project, which has been supported by the ‘ICT for Governance and Policy Modelling’ research initiative of the European Commission.

The chapter is structured in five sections. In the following Sect. 2 the theoretical background of the proposed methodology is outlined, while in Sect. 3 a description of it is provided. Then in Sect. 4 the core technologies employed are reviewed. The results of a first evaluation are presented in Sect. 5. Finally in Sect. 6 the conclusions are summarized and future research directions are proposed.

2 Theoretical Background

According to [20] (a highly influential paper on the ‘Dilemmas in a General Theory of Planning’), public policy problems changed dramatically after World War II. Previously, they were mainly ‘tame’, this term denoting that they had clearer and more widely accepted definitions and objectives, so they could be solved by professionals using ‘first generation’ mathematical methods; these methods aim to achieve some predefined objectives with the lowest possible resources through mathematical optimization algorithms. Though for a long time this approach was successful in solving well defined problems associated with basic needs and problems of society (e.g. creating basic infrastructures), the evolution of society has made it insufficient. Societies tend to become more heterogeneous and pluralistic in terms of culture, values, concerns and lifestyles, and this makes public policy problems ‘wicked’, this term denoting that they lack clear and widely agreed definitions and objectives, and are characterised by high complexity and many stakeholders with different and heterogeneous problem views, values and concerns. In [20] some fundamental characteristics of these wicked problems are identified, which necessitate a different approach than the one used for tame problems:

  • There is no definitive formulation of a wicked problem.

  • A wicked problem usually can be considered as a symptom of another ‘higher level’ problem, so defining the boundaries and the level at which such a problem will be addressed is of critical importance.

  • Solutions to wicked problems are not ‘true-or-false’, but ‘good-or-bad’, and this judgement is not ‘objective’, but highly ‘subjective’, depending on the group or personal interests of the judges and their values.

  • Every wicked problem is essentially unique; despite seeming similarities among wicked problems, one can never be certain that the particulars of a problem do not override its commonalities with other problems already dealt with.

  • Wicked problems have no stopping rule, so planners stop for reasons which are external to the problem (e.g. running out of time, or money).

  • Wicked problems do not have an enumerable set of potential solutions, nor is there a well-described set of permissible operations that may be incorporated into the solution plan.

  • There is no immediate and ultimate test of a solution to a wicked problem, since this requires examination of several types of impacts on numerous persons or groups, and for a long time period.

  • Every solution to a wicked problem is a ‘one-shot operation’; every attempt counts significantly and there is no opportunity to learn by trial-and-error.

For these reasons wicked problems cannot be solved simply by using mathematical algorithms which calculate ‘optimal’ solutions, since they lack the basic preconditions for this: they do not have a clear and widely agreed definition (with each stakeholder group usually having a different view of the problem) or objectives that can be used as criteria for evaluating possible solutions. So [20] suggests that wicked problems require a different ‘second generation’ approach, which combines public participation, in order to formulate a shared definition of the problem, with subsequent technocratic analysis by experts. In particular, its first and fundamental phase is consultation among problem stakeholders, during which discourse and negotiation take place, aiming to synthesize different views and formulate a shared definition of the problem and the objectives to be achieved. Having this as a base, it is then possible in a second phase to proceed to a technocratic analysis by experts, using mathematical optimization algorithms for the problem, which is by then well defined.

Subsequent research on this participative approach to the solution of public policy problems has revealed that it can be greatly supported by the use of appropriate information systems (e.g. [21–23]), which allow problem stakeholders to enter ‘topics’ (meant as broad discussion areas), ‘questions’ (particular issues-problems to be addressed within the discussion topic), ‘ideas’ (possible answers-solutions to questions) and ‘arguments’ (evidence or viewpoints that support or object to ideas). Such a system is termed an ‘Issue Based Information System’ (IBIS), and according to [21] can ‘stimulate a more scrutinized style of reasoning which more explicitly reveals the arguments. It should help identify the proper questions, to develop the scope of positions in response to them, and assist in generating dispute’. The emergence and rapid penetration of the Internet and the Web 1.0 created big opportunities for a wide and cost effective application of such ICT-based participative approaches to the solution of public policy problems, and led to the development of e-participation. The emergence of the Web 2.0 and the relevant social media creates even more opportunities for a wider and more inclusive application of participative approaches to the solution of public policy problems, which engages more social groups than ever before. It enables a wider and more inclusive synthesis of the views of many different and diverse social groups on a public policy problem that government faces, and therefore a better, more balanced and multi-dimensional formulation of a shared definition of the problem and the objectives to be achieved. Therefore adopting such a new e-participation approach exploiting the Web 2.0 can broaden and enhance e-participation, and contribute to better and more socially-rooted and acceptable public policies.
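To make the IBIS structure described above more tangible, the following minimal sketch represents topics, questions, ideas and arguments as a simple data model; the class and field names are our own illustrative choices, not taken from any particular IBIS implementation.

```python
from dataclasses import dataclass, field
from typing import List, Literal

@dataclass
class Argument:
    text: str
    stance: Literal["supports", "objects"]   # evidence for or against an idea

@dataclass
class Idea:
    text: str                                # possible answer/solution to a question
    arguments: List[Argument] = field(default_factory=list)

@dataclass
class Question:
    text: str                                # particular issue within the topic
    ideas: List[Idea] = field(default_factory=list)

@dataclass
class Topic:
    title: str                               # broad discussion area
    questions: List[Question] = field(default_factory=list)

# Example: a stakeholder raises a question under a topic and proposes an idea
topic = Topic("Urban traffic congestion")
question = Question("How can rush-hour congestion in the city centre be reduced?")
idea = Idea("Introduce a congestion charge",
            arguments=[Argument("Reduced traffic in comparable cities", "supports"),
                       Argument("Burden on low-income commuters", "objects")])
question.ideas.append(idea)
topic.questions.append(question)
```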

In the same direction are the conclusions drawn in [19] from an analysis of cases of successful Web 2.0 use in government: Web 2.0 technologies might have stronger transformational effects on government than previous ICTs, driving significant changes at the organizational, cultural, technological and informational levels. The authors argue that this strong transformation potential is due to the lower technical know-how requirements, and therefore the lower cost, for both government organizations and individual citizens, that characterise these Web 2.0 technologies in comparison with the previous generations of ICT used in government (e.g. internal systems, Web 1.0 Internet, etc.). These lower requirements for know-how and for human and financial resources allow a much quicker and easier deployment of Web 2.0 based solutions to meet various external and internal communication needs at various organizational units and hierarchical levels of government agencies. The same paper also suggests that government agencies can exploit Web 2.0 for ‘crowdsourcing’ [24, 25], defined as “the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined large group of people in the form of an open call”, in order to mine fresh ideas from large groups of people for addressing various social needs and problems or for improving public services, radically transforming their ways of interacting with citizens. Also, [15, 26] elaborates the seven basic principles of Web 2.0 proposed by [27] for Internet politics as follows: “the Internet as a platform for political discourse; the collective intelligence emergent from political Web use; the importance of data over particular software and hardware applications; perpetual experimentalism in the public domain; the creation of small scale forms of political engagement through consumerism; the propagation of political content over multiple applications; and rich user experiences on political Web sites”. He suggests that both the research community and government practitioners should take the above principles seriously into account, together with the opportunities they create and the evolutions they drive in the political domain.

3 Methodology Description

The proposed methodology for the efficient exploitation of Web 2.0 by government agencies is based on a central platform, which enables posting policy-related content to multiple social media simultaneously, and then retrieving users’ interactions with it (e.g. views, comments, ratings, votes, etc.), in a systematic, centrally managed and machine-supported automated manner through their APIs. It also allows policy makers to graphically create micro-applications, termed ‘Padgets’ (Policy Gadgets), which can be deployed in the different Web 2.0 social media that allow such applications, in order to convey policy messages to their users, interact with them and receive their opinions. It should be noted that the above content and the Padgets to be deployed in several social media can include a link to a relevant e-consultation conducted on the official website of the competent government agency, to be used by citizens having a strong interest in the policy under discussion. Each of the targeted social media will have a different audience, so that we can finally reach various groups of citizens which are quite different from the ones who visit and use the official government-initiated e-participation websites.

The Padget concept that our methodology introduces is an extension of the concept of ‘gadget’ applications in Web 2.0, which use services and data from heterogeneous sources in order to create and deploy applications quickly, adapted to the needs of public policy formulation. In particular, a Padget is composed of four elements:

  1. A policy message associated with a public policy in any stage of its lifecycle (e.g. a policy white paper, a draft policy plan, a legal document under formulation, an EU directive under implementation, etc.), which can include various kinds of information, such as text, images, video, etc.

  2. An interface allowing users to interact with the Padget, which may give users the capability to access policy documents, be informed on relevant news, vote on some issues, rate various aspects of the policy, express opinions, upload material, tag other people’s opinions or content as relevant, etc.

  3. Interactions of the users with this policy message in various social media, e.g. blogs, YouTube, wikis, social networks, etc., which are retrieved by the central platform.

  4. A decision support module, which performs three levels of processing of these users’ interaction data in order to provide useful information that assists and supports the policy maker in making decisions, and has the architecture shown in Fig. 1 below.

Fig. 1 Architecture of the decision support module
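To make the above composition concrete, the sketch below models a Padget as a plain container of these elements, together with a very first level of interaction processing; the names and fields are illustrative assumptions, not the actual data model of the PADGETS platform.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PolicyMessage:
    title: str
    body: str                                   # text of the policy message
    attachments: List[str] = field(default_factory=list)  # e.g. image/video URLs

@dataclass
class Interaction:
    medium: str                                 # e.g. "facebook", "twitter"
    kind: str                                   # e.g. "view", "comment", "rating", "vote"
    payload: str = ""                           # comment text, rating value, etc.

@dataclass
class Padget:
    message: PolicyMessage
    interface_features: List[str]               # e.g. ["vote", "rate", "comment"]
    interactions: List[Interaction] = field(default_factory=list)

    def summarise(self) -> Dict[str, int]:
        """First, basic level of processing: count retrieved interactions per kind."""
        counts: Dict[str, int] = {}
        for interaction in self.interactions:
            counts[interaction.kind] = counts.get(interaction.kind, 0) + 1
        return counts
```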

Content or Padgets can be deployed in many different categories of Web 2.0 social media, such as:

  • Platforms for Communication, such as Blogs, Internet forums, Presence applications, Social networking sites, Social network aggregation sites and event sites.

  • Platforms for Collaboration, such as Wikis, Social bookmarking (or Social tagging) sites, social news and opinion sites.

  • Platforms for Multimedia and Entertainment, such as Photo sharing, Video sharing, Live casting and Virtual World sites.

  • Platforms for News and Information, such as Google News, institutional sites with a high number of visitors (e.g. the EU, Human Rights and WWF sites) and newspaper sites.

  • Platforms for Policy Making and Public Participation, such as governmental organisations’ forums, blogs, petitions, etc.

From each category the most appropriate social media will be chosen, taking into account the particular public policy under discussion and the audience we would like to involve in the discussion.

A typical application of the proposed methodology in the policy making process would be initiated by a policy maker wanting to “listen to society’s input” in order to make decisions about a future policy to be introduced, or about possible modifications of an already implemented policy. The process to be followed consists of the four steps shown in Fig. 2.

Fig. 2 Typical application process of the proposed methodology

  1. The policy maker designs a campaign using the platform’s capabilities through a graphical drag-and-drop user interface similar to that of existing mash-up editors. The policy maker can add content to this campaign (e.g. a short textual description of the policy, a longer text describing it in more detail, a video and a number of pictures) to be published in Web 2.0 social media that do not allow the deployment of applications. Also, he/she can formulate a Padget application (including some content and also e-voting and/or e-survey functionalities) to be deployed in social media that allow it. Finally, the targeted social media are defined.

  2. The execution of the campaign starts by publishing the above content and deploying the Padget in the defined target Web 2.0 social media using their APIs.

  3. The users of the above social media interact in various ways with the content and the Padget: they access them, see the policy message, vote in favour of or against it (e.g. using like/dislike capabilities), rate it, express opinions, add material, etc. This is performed in a privacy-preserving manner, in accordance with the privacy preferences of each user and the privacy policy specified for the Padget.

  4. At the last stage the above interactions of users are retrieved from all these social media, together with relevant analytics provided by them, using their APIs. Advanced processing of these data is performed at the three levels mentioned above and shown in Fig. 1, in order to provide the policy maker with information about the attitudes of society towards the particular policy and the main issues raised (e.g. remarks, advantages, disadvantages, suggestions for improvement). This can be the end of the campaign, or, if the policy maker needs more information and insight into the attitudes and opinions of the citizens, he/she can go back to step 1 and start a new iteration (a minimal illustrative sketch of such a campaign cycle is given after this list).
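The following sketch illustrates how one iteration of this four-step cycle could be orchestrated programmatically; it is a minimal illustration under simplifying assumptions, and the connector class, method names and media identifiers are hypothetical rather than taken from the PADGETS platform.

```python
from typing import Dict, List

class MediaConnector:
    """Hypothetical wrapper around one social medium's API (see Sect. 4.1)."""

    def __init__(self, medium: str):
        self.medium = medium
        self._posts: List[str] = []

    def publish(self, content: str) -> str:
        """Step 2: push campaign content; returns an id used for later retrieval."""
        self._posts.append(content)
        return f"{self.medium}-post-{len(self._posts)}"

    def retrieve_interactions(self, post_id: str) -> List[Dict]:
        """Step 4: pull views, comments, ratings, votes for a published item."""
        return []   # a real connector would call the medium's API here

def run_campaign(content: str, target_media: List[str]) -> Dict[str, List[Dict]]:
    """One iteration of the four-step process (campaign design is assumed done in step 1)."""
    connectors = [MediaConnector(m) for m in target_media]
    post_ids = {c.medium: c.publish(content) for c in connectors}          # step 2
    # step 3 takes place on the social media themselves (users interact)
    return {c.medium: c.retrieve_interactions(post_ids[c.medium])         # step 4
            for c in connectors}

results = run_campaign("Draft policy on urban wind energy: your views?",
                       ["facebook", "twitter", "blogger"])
```

In a real deployment each connector would wrap the corresponding medium’s API, as discussed in Sect. 4.1, and the retrieved interaction data would feed the three processing levels of Fig. 1.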

4 Core Technologies

4.1 Social Media Application Programming Interface

It is of critical importance for the proposed methodology that the central platform provide interoperability with many different Web 2.0 social media, enabling both posting content to them and retrieving content from them in a machine-supported automated manner through their APIs. In order to assess the existing capabilities in this direction, the APIs of the following ten highly popular Web 2.0 social media were examined in detail: Facebook, YouTube, LinkedIn, Twitter, Delicious, Flickr, Blogger, Picasa, Ustream and Digg. In particular, for each of them we examined the following characteristics:

  • Available APIs and types of capabilities they provide.

  • Capabilities for pushing content in them through their API, where the term “push” reflects any kind of activity that results in adding some type of content in these platforms, such as posts, photos, videos as well as ratings, requests, approvals, intentions, etc.

  • Capabilities for retrieving content from them through their API, where the term “retrieve” reflects any kind of activity that results in acquiring some kind of information from these platforms representing activities that have occurred in them, such as comments on a post, photo or video, approved requests, manifested intentions, re-publication activities, etc.

  • Capabilities for deploying applications (gadgets/widgets) in their environment and having users interact with them.

From this analysis it has been concluded that these Web 2.0 social media have a clear strategy of becoming more open and public and conforming to open API standards. To this end they provide more and more functionalities through their APIs for posting and retrieving content, in order to attract third parties to develop applications. The general trend is to expose methods through their APIs that “go deeply” into their innermost functionalities and provide developers with an ever growing set of capabilities. This includes, on the one hand, content push functionality (this content can be text, images, videos, or have more complex forms, such as “events”, “albums”, etc.); a large portion of each API is dedicated to the creation, uploading, modification and deletion of such content. On the other hand, the APIs also provide functionality that supports the direct retrieval of various types of content generated by users, such as “user ratings”, “unique visits” or “retransmissions” (to other nodes of a social network). However, only Facebook and LinkedIn allow deploying applications in their environment, while the other eight examined social media do not. This means that Padgets can be deployed only in these two social media, while in the remaining ones only content (e.g. postings, images, videos, tweets, etc.) can be published.
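A simple way to encode the outcome of such an API analysis is a capability map that the platform can consult when a campaign is configured; the sketch below is purely illustrative and merely mirrors the qualitative conclusions reported above, as they stood at the time of the analysis.

```python
# Illustrative capability map reflecting the analysis above: every examined medium
# supported pushing and retrieving content through its API, while only Facebook and
# LinkedIn additionally allowed deploying applications (Padgets) at the time.
CAPABILITIES = {
    "facebook":  {"push": True, "retrieve": True, "deploy_apps": True},
    "linkedin":  {"push": True, "retrieve": True, "deploy_apps": True},
    "youtube":   {"push": True, "retrieve": True, "deploy_apps": False},
    "twitter":   {"push": True, "retrieve": True, "deploy_apps": False},
    "delicious": {"push": True, "retrieve": True, "deploy_apps": False},
    "flickr":    {"push": True, "retrieve": True, "deploy_apps": False},
    "blogger":   {"push": True, "retrieve": True, "deploy_apps": False},
    "picasa":    {"push": True, "retrieve": True, "deploy_apps": False},
    "ustream":   {"push": True, "retrieve": True, "deploy_apps": False},
    "digg":      {"push": True, "retrieve": True, "deploy_apps": False},
}

def deployment_mode(medium: str) -> str:
    """Decide whether a Padget or plain content should be sent to a given medium."""
    capabilities = CAPABILITIES.get(medium, {})
    return "padget" if capabilities.get("deploy_apps") else "content-only"
```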

4.2 Opinion Mining

Considerable research has been conducted in the area of opinion mining, defined as the computational processing of opinions, sentiments and emotions found, expressed and implied in text [28–33]. Its initial motivation has been to enable firms to analyze online reviews and comments entered by users of their products in various review sites, blogs, forums, etc., in order to draw general conclusions as to whether users liked the product or not (sentiment analysis), and also more specific conclusions concerning the features of the product that have been commented on (feature extraction) and the orientation (positive or negative) of these comments. From this research considerable knowledge has been generated in this area, consisting of methods and tools for addressing mainly three problems:

  1. Classification of an opinionated text as expressing as a whole a positive, negative or neutral opinion (document-level sentiment analysis),

  2. Classification of each sentence of such a text as objective (fact) or subjective (opinion), then focusing on the latter and classifying each of them as expressing a positive, negative or neutral opinion (sentence-level sentiment analysis),

  3. Extraction, from a set of opinionated texts about the topic under discussion, of the particular features/subtopics commented on by the authors of these texts, and for each of them identification of the orientation of the opinions expressed about it (positive, negative or neutral) (feature-level sentiment analysis).

The above methods and tools enable us to analyze the textual feedback on a proposed public policy, which is provided by the users of the social media where we have published messages or deployed Padgets concerning this policy, and to draw conclusions on: (a) the general sentiments/feelings of the users about this policy (whether they like it or not), and (b) the main particular issues raised about this policy and the main aspects of it that are commented on, along with the sentiments/feelings (positive, neutral or negative) on each of them. These conclusions can be combined with the ones from the analysis of users’ non-textual feedback (e.g. numbers of users who viewed, liked and disliked the message, ratings of it, etc.), so that a more complete picture of the attitudes towards this proposed public policy can be formed. It should be noted that for the practical application of the above opinion mining methods it is of critical importance to have sufficient language resources, such as lexicons of ‘polar words’ (i.e. words with positive and negative meaning to be used for classifying opinions as positive or negative), synonyms and antonyms.
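As a concrete illustration of the lexicon-based flavour of these methods, the sketch below classifies short citizen comments using a tiny lexicon of polar words; it is a toy example only, and any practical application would rely on the much richer language resources (polarity lexicons, synonyms, antonyms) mentioned above.

```python
import re
from typing import Dict, List

# Toy polarity lexicon; real applications need far larger language resources.
POLAR_WORDS = {
    "good": 1, "useful": 1, "support": 1, "agree": 1, "excellent": 1,
    "bad": -1, "useless": -1, "oppose": -1, "disagree": -1, "unfair": -1,
}

def sentence_sentiment(sentence: str) -> int:
    """Sentence-level score: sum of the polarities of the polar words found."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    return sum(POLAR_WORDS.get(token, 0) for token in tokens)

def document_sentiment(comment: str) -> str:
    """Document-level sentiment: aggregate sentence scores into a label."""
    sentences = re.split(r"[.!?]+", comment)
    score = sum(sentence_sentiment(s) for s in sentences if s.strip())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

comments: List[str] = [
    "The proposed measure is useful. I agree with the tax exemption.",
    "This plan is bad and unfair for small firms.",
]
summary: Dict[str, int] = {"positive": 0, "negative": 0, "neutral": 0}
for comment in comments:
    summary[document_sentiment(comment)] += 1
print(summary)   # {'positive': 1, 'negative': 1, 'neutral': 0}
```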

4.3 Simulation Modelling

Law and Kelton [34] define simulation modelling as the research approach of using computer software to model the operation and evolution of “real world” systems. Such a model can be viewed as an artificial world giving the unprecedented opportunity to intervene and attempt to make improvements to the performance of a system, and then estimate the effects of these interventions and improvements on various critical performance variables. As such it is a laboratory, safe from the risks of the real environment, for testing hypotheses and making predictions [35]. In particular, simulation modelling involves creating a computational representation of the underlying logic and rules that define how the real-life system we are interested in changes (e.g. through differential equations, flow charts, state machines, cellular automata, etc.). These representations are then coded into software that is run repeatedly under varying conditions (e.g. different inputs, alternative assumptions, different structures), calculating the changes of the system’s state over time (continuous or discrete) [36]. While other research methods aim to answer the questions “What happened, how and why?” (trying to understand the past), simulation modelling aims mainly to answer the question “What if?” (i.e. what will happen if some particular changes of system structure or rules take place, trying to “move forward” into the future).

According to Borshchev and Filippov [37], based on the level of modelling detail/abstraction (we can have modelling with high abstraction/less detail, medium abstraction/detail or low abstraction/more detail) and on the way time is modelled (as continuous or discrete), we can distinguish four main paradigms of simulation modelling (Fig. 3):

Fig. 3 Main paradigms of simulation modelling (Source: Borshchev and Filippov [37])

  1. Dynamic Systems (enabling high detail simulation in continuous time and used mainly for technical systems),

  2. Discrete Events Modelling (enabling high detail simulation in discrete time),

  3. System Dynamics (enabling simulation at a medium or high level of abstraction in continuous time),

  4. Agent-based Modelling (enabling modelling of the behaviour of the individual ‘agents’ forming the system [at various levels of granularity, e.g. citizens, groups, firms, etc.], from which the system’s behaviour is then derived).

By comparing them we came to the conclusion that System Dynamics (SD) [38–40] is the most appropriate for the analysis of public policies, because such analysis usually requires high level views of complex social or economic systems in continuous time, and also such systems include various individual processes with various types of ‘stocks’ and ‘flows’ among them, which are influenced by a public policy. For these reasons System Dynamics has been successfully used in the past for estimating the evolution of a number of critical variables for society under various policy options, such as unemployment, economic development, taxation income, technology penetration, pollution, poverty, etc., and for the analysis of various types of public policies (e.g. [41–44]). It focuses on understanding initially the basic structure of a system (i.e. its main stocks, flows and the variables influencing them) and then, based on it, estimating the behaviour it can produce (e.g. exponential growth or S-shaped growth of the basic variable), and also how this behaviour will change if various structural changes are made.
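To illustrate the stock-and-flow logic of System Dynamics, the sketch below simulates a single stock (e.g. the number of citizens aware of a policy measure) fed by a word-of-mouth inflow, which produces the S-shaped growth mentioned above; the model structure and parameter values are purely illustrative and are not drawn from any of the policy models cited.

```python
def simulate_awareness(population: float = 100_000.0,
                       initially_aware: float = 1_000.0,
                       contact_effect: float = 0.3,
                       dt: float = 0.25,
                       horizon: float = 40.0) -> list:
    """Euler integration of a one-stock System Dynamics model.

    Stock : aware citizens.
    Flow  : adoption rate = contact_effect * aware * (unaware / population),
            i.e. word-of-mouth diffusion, which yields S-shaped growth.
    """
    aware = initially_aware
    trajectory = []
    t = 0.0
    while t <= horizon:
        trajectory.append((round(t, 2), round(aware)))
        unaware = population - aware
        adoption_rate = contact_effect * aware * (unaware / population)  # flow
        aware += adoption_rate * dt                                      # stock update
        t += dt
    return trajectory

# Policy 'what if': doubling the communication intensity (contact_effect)
baseline = simulate_awareness(contact_effect=0.3)
intensified = simulate_awareness(contact_effect=0.6)
print(baseline[-1], intensified[-1])   # aware citizens at the end of the horizon
```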

5 Evaluation

For the evaluation of the proposed methodology ten pilot applications of it were conducted as part of the abovementioned PADGETS project. They concerned multiple social media consultations on the following topics:

  • “Media freedom”,

  • “Corruption”,

  • “Cooperative institutes’ contribution to poverty reduction, employment generation and social integration”,

  • “Tax evasion and fraud”,

  • “European year of citizens and citizenship”,

  • “Employment, entrepreneurship and freedom of speech for European youth”,

(the above six pilot applications were organized and conducted by the Center for eGovernance Development, Slovenia, which was one of the partners of the PADGETS project, in cooperation with Slovenian Members of the European Parliament [MEPs]),

  • “Under-representation of women executives in the higher management of enterprises”,

  • “Financial crisis in the Southern European countries”,

  • “Exploitation of wind energy”,

(these three pilot consultations were organized and conducted by the University of the Aegean, a partner of the above project, in cooperation with a Greek MEP),

  • “Large-scale implementation of tele-medicine in Piedmont region”,

(this pilot consultation was organized and conducted by Politecnico di Torino, a partner of the above project, in cooperation with the Piedmont Regional Government).

After the end of these pilot applications, semi-structured focus group discussions were conducted to evaluate them, in which the involved personnel of the corresponding government agencies and MEP assistants participated. There was wide agreement that the proposed methodology constitutes a time and cost efficient mechanism for reaching wide and diverse audiences, stimulating and motivating them to think about social problems and public policies under formulation, and also prompting them to provide relevant information, knowledge, ideas and opinions. Furthermore, it enables identifying the main issues perceived by citizens with respect to a particular social problem or domain of government activity, and collecting from them interesting ideas on possible solutions and directions of government activity. However, our pilot applications have shown that the information generated from such multiple social media consultations might not be at the level of depth and detail required by government agencies. So in order to achieve a higher level of detail, and more discussion depth in general, a series of such multiple social media consultations might be required, each of them focused on particular sub-topics and/or participants’ groups. Another risk of this methodology is that it can lead to unproductive discussions among like-minded individuals belonging to the network of the government policy maker who initiated the consultation; such discussions are characterised by low diversity of opinions and perspectives, low productivity of knowledge and ideas, and in general limited creativity. Therefore for the effective application of the proposed methodology it is of critical importance to build large and diverse networks for these social media consultations; for this purpose we can combine the networks of several government agencies, and also politicians, preferably from different political parties and orientations, and also invite additional interested and knowledgeable individuals and civil society organizations. A more detailed description of the process and the results of this evaluation is provided in [45, 46].

Furthermore, from these focus group discussions it has been concluded that, in order to be put into practice by government agencies, this new hybrid multi-channel approach to e-participation will require significant changes at the organizational, cultural and technological levels. First, it will necessitate the creation of new organizational units to manage the above new e-participation channels, and also to analyze the large quantities of both structured data (e.g. citizens’ ratings) and unstructured data (e.g. citizens’ postings in textual form) that these channels will create. The personnel of these new units must have specialised skills concerning these electronic modes of communication, and also a culture quite different from the dominant ‘law enforcement’ culture of government agencies. Also, the analysis of the large quantities of unstructured textual data that will be collected from the above channels (e.g. hundreds or thousands of postings) cannot be performed manually, as this would require a lot of human resources (increasing costs) and also a long time (which might cause delays in the decision and policy making processes of government agencies); therefore it is necessary to use sophisticated ICT-based tools that implement complex opinion mining methods. These tools will have to be integrated with the technological infrastructures of the above channels, increasing technological complexity; also, the use of these tools is not easy, and requires extensive adaptations and language resources, such as lexicons of polar words, synonyms and antonyms. Furthermore, new processes should be established for integrating the results and conclusions of the analysis of the above structured and unstructured e-participation channels’ data into the decision and policy making processes. Finally, government agencies should become accustomed to the style and language of interaction in Web 2.0 social media, and the whole culture that characterises them, which are quite different from those of the official e-participation spaces or the other modes of interaction with citizens.

6 Conclusions

In the previous sections a methodology has been presented for the efficient exploitation of Web 2.0 social media by government agencies, in order to achieve a wider interaction with more and more diverse groups of citizens and to broaden and enhance e-participation. It is based on a central platform, which allows publishing content and deploying micro web applications (Padgets) to multiple Web 2.0 social media simultaneously, and also retrieving users’ interactions with them (e.g. views, comments, ratings) in all these social media, in an efficient, systematic, centrally managed and machine-supported automated manner using their APIs. This central platform also performs various levels of advanced processing of these interaction data, such as calculation of useful analytics, opinion mining and simulation modelling, in order to extract from them information appropriate for substantially supporting government decision and policy makers. A first evaluation of this methodology in a series of pilot applications gave positive results as to its capabilities and value, and at the same time revealed some critical preconditions for its successful application.

The proposed methodology leads to a transformation of the current government agencies’ approach to e-participation, which is based on the provision to the citizens of a single e-participation channel (i.e. an official e-participation space), into a ‘hybrid’ multi-channel one. This new approach, instead of the ‘one channel for all’ logic of the current approach, uses a series of interconnected e-participation channels with quite different characteristics, levels of structure and target groups:

  1. an official highly structured e-participation space (e.g. a structured forum that imposes the semantic annotation of users’ postings, according to a predefined discussion ontology, and allows only some predefined relations among them [12–14]), to be used mainly by a small group of citizens with good knowledge of the policy under discussion, high education and willingness to spend considerable time and effort on it; access to it can be controlled and limited to invited persons, such as representatives of main stakeholders and widely recognised experts, or free,

  2. an official unstructured e-participation space (e.g. a usual forum), to be used by a wider group of citizens who have some knowledge of the policy under discussion, sufficient education for entering such an e-consultation, some familiarity with such tools, and willingness to spend some time and effort on it,

  3. and also a system like the one described in the previous sections, which allows the exploitation of various Web 2.0 social media for e-participation purposes, by publishing content on the policy under discussion, deploying relevant micro web applications (Padgets), and then retrieving and processing centrally all citizens’ interaction data; this lower-structure channel will allow reaching a much wider and more diverse group of citizens than the other two channels, who are not familiar with the operation, style and language of the abovementioned types of e-consultations, or cannot spend much time participating in them, or do not have sufficient knowledge of the policy under discussion.

It should be mentioned that the above channels should be interconnected, so that a user of one of them can easily move to the others; e.g. a citizen who reads some content about a policy under formulation on a Web 2.0 platform, has a first level of interaction with it (e.g. a simple rating of it) and gets interested in it, can easily be linked to the official e-participation space of the competent government agency, in order to participate in a more structured relevant consultation.

Further research is required for the evaluation of the proposed methodology from different perspectives, in various types of government agencies and for different kinds of policy consultations, which might lead to modifications and improvements of the methodology, its application process and its supporting technological infrastructure. Also, our research has focused on the exploitation of social media by government agencies as a means of more intensive ‘external communication’ with their external environment (e.g. with society: civil society organizations and individual citizens); so further research is required on the exploitation of social media as a means of more intensive ‘internal communication’ among different government agencies (or even among different departments of the same government agency) for the design and implementation of public policies.