Preprint
Article

This version is not peer-reviewed.

Anticipatory AI Governance in the Age of Supercomputing: A Mixed-Methods Multistakeholder Approach in the Basque Country

Submitted: 10 March 2026

Posted: 11 March 2026


Abstract
Artificial Intelligence (AI) is increasingly embedded in public governance, raising questions about how institutions can anticipate its societal implications while safeguarding democratic accountability amid expanding computational infrastructures. This article examines how anticipatory AI governance can be operationalised in the age of supercomputing through a mixed-methods multistakeholder approach in the Basque Country (Spain). The study focuses on the city-regional governance setting of Gipuzkoa, a devolved historical territory with fiscal autonomy and a growing advanced-computing ecosystem centred in Donostia–San Sebastián, where regional initiatives are positioning the Basque Country as an emerging “quantum territory” within Europe’s high-performance and quantum computing landscape, including the installation of IBM Quantum System Two. Methodologically, the study combines action research with three stakeholder groups and a quantitative online survey of citizens (N = 911). The action research engaged six civil society organisations, seven provincial directorates, and eleven municipalities. Results indicate that city-regional administrations can function as laboratories for public AI governance when policy experimentation is combined with empirical evidence and advanced computational infrastructures. The findings suggest policy recommendations for supercomputing ecosystems, including transparent AI experimentation, public-interest data governance, and policy sandboxes linking advanced computing, civic participation, and accountable digital public services.
Subject: Social Sciences – Government

1. Introduction

Artificial Intelligence (AI) is becoming deeply embedded in contemporary public governance. Across welfare administration, urban management, service delivery, education, health, and public communication, algorithmic systems increasingly shape how institutions process information, make decisions, and interact with citizens [1,2,3,4]. This shift is not merely technical. It reconfigures institutional authority, accountability, and legitimacy in datafied democracies, particularly when automated and generative systems intervene in areas that affect rights, access to services, and everyday forms of state–citizen interaction [1,3,4]. As a result, the central question is no longer whether public administrations will adopt AI, but how they can do so in ways that remain democratically accountable, socially legitimate, and territorially inclusive [5,6,7,8].
Existing scholarship has shown that the public-sector use of AI creates both opportunities and risks. On the one hand, AI may support improved administrative capacity, faster processing, predictive analysis, and new forms of public problem-solving [2,4,9]. On the other hand, empirical evidence from automated decision-making systems demonstrates that these gains can come at the cost of opacity, bias, due-process failures, and weakened contestability [5,6,10,11]. Burrell’s classic analysis of algorithmic opacity remains especially relevant because it demonstrates that the problem is not only that machine-learning systems are technically complex, but also that they become socially difficult to interpret, explain, and govern [8,10]. In public administration, this opacity is compounded by procurement asymmetries, outsourcing arrangements, and institutional dependence on external technological infrastructures, which may weaken democratic oversight [11,12]. Cases such as Robodebt in Australia and SyRI in the Netherlands illustrate the consequences of poorly governed automation: administrative harm, erosion of public trust, and violations of rights protections [5,7,13].
These concerns have intensified with the expansion of generative AI and the infrastructural growth of advanced computational systems. Public institutions are increasingly confronted not only with software applications but with wider ecosystems of compute, data, models, and standards that shape what AI can do, who controls it, and how its risks are distributed [14,15,16,17]. In Europe, these developments are connected to broader policy agendas on AI adoption, digital sovereignty, digital public infrastructure, and strategic competitiveness [14,18,19,20,21]. The rise of high-performance computing and quantum infrastructures adds a new layer to this debate. Supercomputing expands the scale, speed, and potential reach of AI systems, making governance questions more urgent rather than less. If advanced computational infrastructures are to serve the public interest, their societal implications must be addressed before technological lock-in, institutional drift, or democratic harm become entrenched.
This article addresses these issues through the concept of anticipatory AI governance. Anticipatory governance refers to the capacity of institutions to act under conditions of uncertainty by combining foresight, reflexivity, experimentation, and iterative learning [22,23]. Rather than responding to technological harms only after they materialise, anticipatory governance seeks to build the institutional capability to identify plausible futures, test interventions, and adapt policy in advance of full-scale deployment. This approach is particularly relevant to AI because AI systems evolve rapidly, diffuse unevenly, and generate contested effects across social groups and territories [6,9,24]. In democratic settings, anticipatory governance is not simply about technical preparedness. It is about ensuring that institutions remain capable of steering socio-technical change in ways that preserve public value, accountability, and citizen trust.
Recent work in mission-oriented and market-shaping public policy reinforces this argument. Mazzucato has shown that the state should not be treated merely as a reactive regulator correcting market failures, but as an active shaper of socio-technical transformation [25]. Extending this line, Mazzucato and Kattel argue that public-sector capacities and capabilities are central to steering innovation under uncertainty, especially when governments must align new technologies with collective goals rather than narrow efficiency logics [17,26]. Applied to AI, this means that governance cannot be reduced to abstract ethics principles, ex post audits, or compliance checklists alone. Instead, anticipatory AI governance must be understood as a problem of institutional capacity: the ability of public administrations to shape, not merely absorb, the trajectories of AI adoption [25,26,27,28].
A further reason for focusing on anticipatory AI governance lies in the territorial unevenness of digital transformation. AI is often discussed at national, supranational, or corporate scales, yet it is frequently deployed, experienced, and contested locally. Digital divides, administrative capabilities, civic infrastructures, and trust relations vary sharply across territories, shaping how algorithmic systems are introduced and with what consequences [27,28]. Regional institutions therefore matter because they mediate the relationship between technological infrastructures and public outcomes. Caragliu and Del Bo show that regional institutions significantly influence the urban digital divide [12], while Caragliu, Mora, and Appio argue that AI governance is inseparable from territorial innovation systems and smart urban futures [13]. These insights suggest that subnational governments are not peripheral actors in AI governance. They are key sites where the distributive and democratic effects of AI are negotiated in practice.
The Basque Country offers a particularly productive setting in which to examine these dynamics. More specifically, Gipuzkoa constitutes a distinctive city-regional governance context: a historical territory with fiscal autonomy, a strong institutional tradition of policy experimentation, and an increasingly dense advanced-computing ecosystem (Figure 1). This territorial specificity matters. As previous work on small nations, territorial governance, and the Basque case has shown, devolved institutions may have particular capacities to connect innovation, public policy, and place-based governance [29]. In Gipuzkoa, these capacities now intersect with a growing AI and supercomputing landscape centred on Donostia–San Sebastián, including the recent installation of IBM Quantum System Two and related efforts to position the Basque Country as an emerging “quantum territory” within Europe’s high-performance and quantum computing landscape [15,30] (Figure 2). This combination of devolved governance and advanced computational infrastructure makes Gipuzkoa a distinctive case for analysing how anticipatory AI governance can be operationalised at the subnational scale.
Yet the presence of supercomputing infrastructures does not automatically translate into social value [24]. Advanced computing may reinforce existing asymmetries unless institutions deliberately connect technological capacity with democratic purpose, public-interest data governance, and inclusive policy design. This is where digital inclusion becomes central. Digital inclusion is no longer reducible to access, connectivity, or individual skills. In increasingly automated public environments, inclusion also concerns participation, intelligibility, contestability, and the ability of citizens to understand and challenge algorithmically mediated outcomes [27,31,32]. Recent work on democracy and digital technologies in the Basque context similarly shows that institutional innovation must be linked to civic participation and democratic culture if digital transformation is to remain socially legitimate [33]. In this sense, anticipatory AI governance and digital inclusion are mutually constitutive: governance without inclusion risks technocratic drift, while inclusion without governance risks remaining normatively appealing but institutionally weak.
This article therefore proceeds from the hypothesis that mixed methods are necessary to test how the social impact of supercomputing should be addressed through anticipatory AI governance. The reason is straightforward. The social implications of AI and supercomputing are neither purely technical nor fully observable through a single method. They unfold across institutional processes, stakeholder perceptions, policy experimentation, and citizen attitudes. A mixed-methods design is therefore required to capture these multiple dimensions simultaneously. In this article, that design combines action research with a quantitative online survey. Action research is especially appropriate because it embeds inquiry in ongoing institutional processes, enabling iterative learning, co-production, and reflexive engagement with public actors [34]. At the same time, mixed-methods research offers a well-established framework for integrating qualitative and quantitative evidence in the study of complex social phenomena [35,36]. For the purposes of this study, the combination of multistakeholder action research and citizen online survey evidence allows the analysis to move from institutional dynamics to broader societal perceptions of AI governance.
The empirical focus of the article is an action-research process developed in Gipuzkoa between 2025 and 2026 and connected to a Digital Inclusion Strategy promoted by the Human Rights and Democratic Culture Directorate of the Provincial Council. The research design involved two components. First, the action research engaged three stakeholder groups: six civil society organisations, seven provincial directorates, and eleven municipalities. Second, a quantitative online survey of citizens in Gipuzkoa (N = 911) was deployed in parallel. The result is a mixed-methods design that makes it possible to examine anticipatory AI governance not only as a theoretical proposition but as a practical governance architecture emerging through interaction between public institutions, civic actors, and territorial administrations.
The multistakeholder dimension of this design deserves emphasis. The combination of multistakeholder action research and citizen online survey evidence allows the analysis to connect four interrelated perspectives: civil society organisations and NGOs, institutional departments within the Provincial Council, municipalities across the territory, and citizens as the wider public affected by AI-driven governance. This multistakeholder framework is methodologically necessary because anticipatory AI governance can only be properly assessed when institutional, territorial, civic, and societal dimensions are examined together.
The article addresses the following research question: How can city-regional governments operationalise anticipatory AI governance in the age of supercomputing through a mixed-methods multistakeholder framework involving civil society organisations, institutional departments, municipalities, and citizens, while safeguarding democratic accountability and digital inclusion?
This research question responds to three gaps in the literature. First, while AI governance scholarship has expanded rapidly, much of it remains focused on national strategies, global principles, or sector-specific regulation, with less attention to how anticipatory governance is institutionalised in subnational settings [1,6,9]. Second, discussions of supercomputing and quantum infrastructures often privilege competitiveness, innovation, and technological sovereignty, but devote less attention to their social implications and governance requirements at the territorial scale [14,15,17,21]. Third, digital inclusion research has not always fully engaged with AI as a structural driver of exclusion, despite growing evidence that automated systems can reproduce inequalities through biased data, opaque decision pathways, and uneven institutional capacities [10,27,28,31].
In response, this article makes three contributions. First, conceptually, it connects anticipatory governance and digital inclusion in order to frame AI governance as a question of territorial public capability rather than narrow compliance [22,25,26]. Second, empirically, it provides a city-regional case of how anticipatory AI governance can be operationalised in a devolved territory shaped by both civic participation and advanced computational infrastructures [15,29,30]. Third, methodologically, it demonstrates why a mixed-methods multistakeholder approach is necessary to understand the social impact of supercomputing, since such impacts emerge through the interaction of infrastructures, institutions, and publics rather than through technological deployment alone [34,35,36].
At a broader level, the article also engages critical debates about the politics of AI. Recent scholarship has warned against technocratic, solutionist, or ideological framings that present AI as inherently progressive while neglecting power asymmetries, labour disruptions, environmental trade-offs, and democratic risks [24,37,38]. In urban and regional contexts, the danger is that supercomputing and AI adoption become attached to prestige narratives or competitiveness agendas while bypassing deeper questions of accountability, redistribution, and inclusion. Anticipatory AI governance offers an alternative only if it remains grounded in civic participation, institutional reflexivity, and public-interest infrastructures. Hence, the significance of the Gipuzkoa case is not simply that it hosts advanced computing assets, but that it offers a setting in which to test whether territorial governance can connect supercomputing capacity with democratic experimentation and socially grounded policy learning.
The principal argument advanced here is that city-regional administrations can function as laboratories for public AI governance when policy experimentation is combined with empirical evidence, multistakeholder engagement, and advanced computational infrastructures. In such contexts, anticipatory AI governance is most effective when it is embedded in territorial institutions, linked to digital inclusion, and informed by mixed-methods evidence capable of capturing both institutional processes and citizen-level perceptions. Supercomputing’s social impact, in other words, should not be treated as an externality to be managed later, but as a governance question to be addressed from the outset through transparent experimentation, public-interest data governance, and accountable digital public services.
The remainder of the article is structured as follows. Section 2 reviews the literature on anticipatory governance, mixed-methods research, and the social impact of AI and supercomputing. Section 3 presents the methods, explaining the mixed-methods design and the integration of action research with the quantitative online survey. Section 4 reports the results from the three stakeholder groups and the citizen survey. Section 5 discusses the findings in relation to anticipatory AI governance in the age of supercomputing. Section 6 concludes by summarising the main contributions, outlining the limitations of the study, and identifying future research avenues.

2. Literature Review

This section develops the analytical framework of the article by connecting three literatures that are often discussed in parallel rather than in combination: (i) anticipatory AI governance, (ii) mixed-methods multistakeholder research, and (iii) the social impact of AI and supercomputing. The purpose is not to repeat the general problem statement already set out in the Introduction, but to clarify the conceptual terrain within which the article is positioned and to identify the specific gap it addresses. The central claim is that anticipatory AI governance in the age of supercomputing cannot be understood only through ethical principles, legal safeguards, or innovation strategies. It must also be approached as a territorial and institutional process in which public administrations, infrastructures, organised stakeholders, and citizens interact under conditions of uncertainty, asymmetrical capability, and uneven exposure to risk.
Three propositions organise the review. First, anticipatory AI governance is best understood as a public capability for acting under uncertainty, rather than as a rhetorical commitment to future-oriented thinking [14,15,17,18]. Second, that capability cannot be adequately studied through a single methodological lens, because AI governance unfolds across heterogeneous sites: policy design, administrative practice, civic mobilisation, territorial implementation, and public perception [21,33,34]. Third, the social impact of AI and supercomputing is not an external consequence to be assessed only after deployment. It is the substantive field in which questions of rights, inclusion, labour, sovereignty, infrastructure, sustainability, and democratic legitimacy are negotiated [24,25,26,27,28,29,36,37,38]. Taken together, these literatures justify the article’s decision to examine anticipatory AI governance through a mixed-methods multistakeholder framework in a devolved city-regional setting.

2.1. Anticipatory AI Governance

Recent scholarship on AI governance has increasingly moved from general ethical concern toward more operational questions about institutional design, administrative capacity, implementation, and public legitimacy [1,2,3,4,9,11]. This shift is significant because public-sector AI differs from many private-sector applications in one decisive respect: it frequently intervenes in domains where decisions affect rights, duties, eligibility, recognition, and access to public services. As a result, public AI governance cannot be reduced to technical performance, innovation rhetoric, or compliance checklists. It must also address accountability, contestability, human oversight, and the conditions under which institutional authority remains democratically legitimate [6,9,11,31].
Within this field, anticipatory governance provides a particularly useful lens because it foregrounds the problem of acting before institutional failure becomes visible. Anticipatory governance has been defined as the ability to govern under uncertainty by embedding foresight, experimentation, reflexivity, and iterative learning into public decision-making [14,15]. Rather than treating technological change as an exogenous force to which institutions merely respond, it treats public institutions as actors capable of preparing for, shaping, and revising socio-technical trajectories. This is especially pertinent in the AI domain, where systems evolve rapidly, infrastructures are increasingly complex, and the societal effects of deployment often become apparent only once routines and dependencies have already hardened [25,26,27,28,29].
For this reason, anticipatory AI governance should be seen not as a specialised subfield of ethics, but as a mode of institutional organisation. Its focus lies less on enumerating principles than on asking whether public institutions possess the capacities needed to steer technological adoption in the public interest. These capacities include strategic foresight, internal coordination, regulatory interpretation, procurement judgement, stakeholder engagement, and the ability to revise or halt deployments when harms, uncertainties, or legitimacy deficits become evident [17,18,19,29]. In that sense, anticipation is linked directly to state capacity.
Mission-oriented and market-shaping theories of the state reinforce this argument. Mazzucato’s work challenged the view that the state should be seen only as a corrective actor intervening after markets fail, instead highlighting its role in direction-setting, coordination, and public-value creation [17]. Mazzucato and Kattel extend this line of thought by showing that public-sector capacities and capabilities are essential when innovation trajectories are uncertain and socially consequential [18]. AI governance, from this perspective, is not only about limiting harmful uses of technology. It is about whether public institutions can shape the trajectories of AI adoption in ways that align with inclusion, accountability, and democratic legitimacy. This is particularly relevant in contexts where the pressure to adopt AI is driven by competitiveness agendas, vendor logics, or technological prestige rather than by clearly articulated public purposes [18,19,25].
At the same time, anticipatory governance should not be treated as normatively unproblematic. A critical literature warns that anticipatory practices may drift into technocratic managerialism if they are not anchored in public deliberation and democratic accountability [16,22,35]. Coeckelbergh and Sætra describe the tension between technocracy and democracy in AI-mediated governance, arguing that reliance on AI may weaken human agency when political judgement is displaced by computational authority [39]. Cugurullo’s notion of “AIdeology” similarly shows how AI can be framed as inevitable, progressive, and solutionist in ways that depoliticise infrastructure choices and narrow the range of imaginable alternatives [35]. In parallel, work on anticipatory urban governance suggests that generative AI may begin to function as an “oracle”, encouraging forms of decision-making that appear neutral or intelligent while obscuring underlying political assumptions [40]. This critical strand is important because it reminds us that anticipation must not be confused with neutral foresight; it is always shaped by who anticipates, with what tools, for whose benefit, and under what institutional conditions [41].
Another major debate concerns scale. Much of the most visible AI governance literature focuses on national strategies, supranational frameworks, or global debates. Yet public AI is often operationalised at subnational and municipal levels, where administrative capacity, citizen trust, and territorial inequalities differ substantially [12,13,31,42,43,44,45,46]. The significance of these territorial scales is not merely practical. It is analytical. Regional and municipal institutions mediate how AI enters public services, how infrastructures are organised, how civic actors are engaged, and how risks are distributed across populations [12,13]. Recent work on municipal AI governance underscores that durable governance arrangements must be built “from the ground up,” through organisational learning, cross-departmental coordination, and context-sensitive oversight [46]. This is especially important for the present article because Gipuzkoa is not simply a locality but a devolved historical territory with distinctive fiscal powers, institutional capacity, and innovation infrastructures [29]. Anticipatory AI governance in such a setting is therefore neither exclusively local nor national; it is territorial and multi-scalar.
Accordingly, the most useful way to conceptualise anticipatory AI governance for this article is as a territorial public capability: a capacity to align foresight, innovation, institutional coordination, and democratic accountability across actors and scales. This framing makes it possible to move beyond abstract ethical discourse and toward the practical question of how anticipatory governance is institutionalised in devolved public administrations facing the opportunities and pressures associated with AI and supercomputing.

2.2. Mixed-Methods to Shed Light on AI’s and Supercomputing’s Social Impact

If anticipatory AI governance is a problem of governing uncertainty, then the question of method becomes central rather than peripheral. One of the main limitations in the existing literature is that AI governance is often studied through segmented lenses: some studies focus on law and regulation, others on public-sector implementation, others on citizen attitudes, and still others on technical risk. These approaches all generate valuable insights, but they often isolate dimensions of a phenomenon that is, in practice, distributed across multiple institutional and social locations. A key premise of this article is therefore that AI governance cannot be adequately understood through a single methodological register.
Mixed-methods research offers an especially appropriate response to this challenge. Creswell and Plano Clark define mixed methods as the systematic integration of qualitative and quantitative evidence in order to provide a fuller understanding of a research problem than either approach could yield independently [33]. Tashakkori and Teddlie similarly argue that mixed-methods designs are particularly useful where social processes are complex, multi-layered, and resistant to single-mode explanation [34]. AI governance clearly fits this description. It involves formal institutions and informal practices, administrative routines and public narratives, measurable attitudes and situated experiences. Its effects are dispersed, and the meanings attached to them vary by actor and level.
The methodological issue at stake is therefore not simply that “more methods” provide more data. Rather, different methods make different dimensions of the governance problem visible:
Qualitative inquiry is especially useful for tracing organisational processes, identifying coordination problems, understanding how actors frame risks and opportunities, and documenting how policy innovation is negotiated in practice. In this article, action research is used to examine how stakeholders address the intertwined challenges of AI and supercomputing, particularly from the operational perspective of digital inclusion and Generative AI, through four analytical blocks: (i) digital divide, (ii) algorithmic bias, (iii) data divide, and (iv) digital futures. This qualitative strand produced a Digital Inclusion and Generative AI Decalogue, co-developed between March and July 2025 with six NGOs drawn from the NGO network Aniztasunaren Sarea, whose member organisations work with structurally excluded groups across Gipuzkoa, including migrants, women, people with disabilities, Roma communities, and asylum seekers [23] (Figure 3). Structured around ten dimensions, the Decalogue is applied as an analytical tool to six civil society organisations/NGOs, seven directorates within the Gipuzkoa Provincial Council, and eleven municipalities (ElkarBizi network). It functions simultaneously as a diagnostic instrument and as a roadmap for action research, making visible how stakeholders perceive access barriers, algorithmic discrimination, data governance, statistical invisibility, digital-rights gaps, and the collective imagination of just digital futures. In this sense, the Decalogue embeds civil society knowledge into the empirical design of the study and provides a territorially grounded framework for analysing anticipatory AI governance in practice [24].
Quantitative inquiry is valuable for mapping how perceptions, concerns, trust, and expectations are distributed across a broader population. The online survey conducted with 911 citizens in Gipuzkoa was designed to assess how AI is being integrated into daily life and how its social implications are perceived across the territory. The results indicate both widespread uptake and conditional legitimacy. While 74% of respondents report having used some form of AI tool, and 62% of users do so at least once a week, this diffusion is accompanied by notable concerns regarding dependency, privacy, and data protection, particularly in public-sector applications. The survey also reveals significant age-based disparities, with usage highest among younger respondents and considerably lower among older groups, suggesting that the key divide is less attitudinal than practical, namely unequal digital capability. In the public-administration domain, respondents generally support the use of AI for automating repetitive tasks and document management but show a clear preference for maintaining human interaction in consequential procedures. The survey therefore complements the action-research component by providing population-level evidence that AI in Gipuzkoa is perceived not as a technology to be rejected, but as one whose legitimacy depends on transparency, safeguards, sustainability, and the preservation of human-centred public services.
A mixed-methods design becomes analytically powerful when these two forms of evidence are not merely juxtaposed but integrated around a common problem: how anticipatory AI governance can be operationalised in a territorially grounded manner that connects institutional experimentation with societal perceptions and lived experiences. In this study, the qualitative strand captures the organisational and participatory dynamics through which public institutions, civil society organisations, and municipalities negotiate the challenges of digital inclusion, data governance, and algorithmic accountability. The quantitative strand, by contrast, provides a broader societal perspective, revealing how citizens perceive AI adoption, its opportunities, and its risks across the territory. When analysed together, these two sources of evidence allow the research to bridge micro-level governance processes with macro-level social attitudes. This integration makes it possible to identify both institutional capacity gaps and societal expectations regarding AI and supercomputing infrastructures. Consequently, the mixed-methods approach does not merely triangulate data; it enables a multi-scalar understanding of AI’s and supercomputing’s social impact, linking participatory governance processes, public policy experimentation, and citizen perceptions within a single analytical framework.
Action research is particularly well suited to the study of anticipatory AI governance because it treats inquiry as part of an iterative process of problem-solving, reflection, and institutional change [21]. Lewin’s classic formulation remains relevant precisely because governance innovation is rarely a stable or finished object. Public institutions experimenting with AI are learning in real time, revising organisational practices, and encountering new tensions as technologies, regulations, and public expectations evolve [21]. Action research is therefore especially appropriate where the goal is not only to observe governance, but to understand how governance capacity is built through interaction, feedback, and institutional learning.
However, action research on its own can also be limited. One of its risks is that it may privilege organised institutional and stakeholder perspectives while leaving aside wider societal patterns of concern, expectation, or uncertainty. This matters in AI governance because many of the most significant legitimacy questions concern how citizens perceive and experience digital transformation, not only how institutions design it. For this reason, the combination of action research with citizen online survey evidence is analytically important. It allows the researcher to connect institutional process with societal perception, thereby moving beyond elite or organisational perspectives alone [33,34,47].
The multistakeholder dimension further strengthens this design. In the present article, the mixed-methods framework is organised around four interrelated perspectives: civil society organisations and NGOs, institutional departments within the Provincial Council, municipalities across the territory, and citizens as the broader public affected by AI-driven governance. This architecture mirrors the governance problem itself, as initially articulated in the II Diversity Plan of the Directorate of Human Rights and Democratic Culture of the Provincial Council of Gipuzkoa. Civil society organisations may identify exclusion, vulnerability, and rights-based concerns that are invisible in formal policy discourse. Institutional departments reveal how responsibilities are distributed within the administration, where capacity gaps emerge, and how different units frame AI as opportunity, risk, or both. Municipalities illuminate territorial unevenness in implementation and capability. Citizens provide a wider picture of perception, trust, uncertainty, and acceptance. These are not redundant viewpoints. They are complementary ways of knowing the same socio-technical transformation [48,49,50,51].
This multistakeholder orientation is supported by the literature on co-production and participatory governance. Ostrom’s work demonstrated that public value is often generated through interaction between institutions and the publics affected by policy, rather than through top-down provision alone [52]. More recent work on civic and participatory AI extends that insight by suggesting that AI governance acquires legitimacy when citizens and affected groups are not merely consulted ex post, but are included in defining what counts as harm, fairness, acceptable risk, or trustworthy use [51,52]. This is especially important because many harms associated with AI—statistical invisibility, accessibility barriers, discriminatory profiling, or interpretive opacity—are often first experienced and articulated outside formal institutional settings.
The multistakeholder framework thus serves both an epistemic and a democratic function. Epistemically, it widens the range of evidence available to the researcher and reveals mismatches between institutional narratives and lived realities. Democratically, it aligns the research design with the underlying claim of anticipatory governance: that governing emerging technologies requires reflexivity, participation, and plural forms of knowledge. In this sense, mixed-methods multistakeholder research is not only a methodological preference. It is an epistemic infrastructure for studying anticipatory governance in practice.
This is especially relevant in the age of supercomputing [53,54]. Computational infrastructures may be highly visible as symbols of innovation, regional ambition, or technological sovereignty, while their social implications remain unevenly distributed and only partially legible within official discourse. A mixed-methods multistakeholder design allows these uneven effects to be traced across institutions, territories, and publics. It therefore provides the empirical basis for assessing whether anticipatory AI governance is being enacted as a genuinely inclusive and democratic public capability, rather than simply as a strategic or administrative discourse.

2.3. AI’s and Supercomputing’s Social Impact

The third strand of the literature concerns the substantive terrain on which the previous two debates converge: the social impact of AI and the computational infrastructures that sustain it. Policy discourse frequently frames AI and supercomputing in terms of innovation, productivity, competitiveness, or strategic leadership [25,26,27,28,29]. These are undeniably important dimensions, especially in regional development and industrial policy. Yet they do not exhaust the matter. A growing body of scholarship argues that the effects of AI are fundamentally social and political because they reshape rights, labour, infrastructure, governance capacity, and the conditions of democratic life [24,36,37,38].
At the most immediate level, social impact concerns bias, discrimination, opacity, and unequal exposure to harm. Research in law, ethics, and governance has shown that harms associated with AI do not arise only from flawed models. They also arise from the institutional contexts in which those models are deployed: settings where accountability is fragmented, data are incomplete, redress mechanisms are weak, or affected groups are underrepresented [42,43,44,45]. In public-sector contexts, such failures are especially consequential because they may affect welfare allocation, legal recognition, eligibility, risk classification, or access to services. This means that questions of fairness, intelligibility, privacy, and contestability are not peripheral to social impact; they are central to it.
A second strand of the literature shifts attention from discrete systems to the infrastructures that make AI possible. The growing interest in digital public infrastructure is particularly important here. This literature treats digital infrastructures not as neutral technical backdrops, but as foundational arrangements through which identity, interoperability, data exchange, and service delivery are organised [53,54]. AI systems rely on these infrastructures for their operation, and their governability depends in large part on whether those infrastructures are publicly oriented, transparent, and institutionally governable. If the infrastructural layer is fragmented, externally controlled, or weakly regulated, then even apparently well-designed AI systems may generate exclusion, opacity, or dependency.
This infrastructural perspective connects directly to debates on data governance and sovereignty. Data commons scholarship argues that the central issue is not only how data are used, but who controls access, reuse, and governance arrangements [55]. Indigenous data sovereignty literature extends this argument by showing that data governance is collective, historical, and political, not merely technical or individualised [56]. More broadly, work on digital sovereignty suggests that public institutions must grapple with dependencies in compute, cloud, standards, and platforms if they are to retain meaningful autonomy in the governance of AI. These questions become especially important in the age of supercomputing, where advanced compute capacity may promise local opportunity while simultaneously deepening infrastructural concentration and external dependence.
Supercomputing intensifies the social impact question precisely because it expands capability. High-performance and quantum infrastructures increase the scale, speed, and range of tasks that AI systems can perform. They may enhance scientific research, simulation, optimisation, and sophisticated public services. Yet they also raise questions about who benefits, who participates, what capacities are built locally, and under what conditions advanced compute becomes publicly accountable [26,27,57,58]. In other words, supercomputing is not a neutral technological upgrade. It is an infrastructural condition that may either widen or narrow democratic and territorial capacity, depending on how it is governed.
A third literature situates AI within broader political economy. AI and advanced computing are reshaping labour markets, skill needs, industrial trajectories, and regional development pathways [57,58,59,60,61,62,63]. The language of creative destruction remains useful insofar as it captures the disruptive and generative dynamics of innovation [57,58]. Yet the relevant question for public governance is not only whether disruption occurs, but how it is distributed and governed. Recent work on labour-market effects and skill mismatches suggests that the benefits of AI are unlikely to be evenly shared unless they are accompanied by reskilling, public investment, and inclusion-oriented territorial strategies [59,60,61,62]. In devolved settings such as Gipuzkoa, where institutional capacity is relatively strong but municipal and social conditions remain uneven, these political-economic questions are inseparable from the governance of AI and supercomputing [60].
A fourth and increasingly important strand concerns environmental sustainability and democratic life. AI systems are material and energy-intensive; they rely on extractive infrastructures and may generate rebound effects that complicate simplistic efficiency narratives [64,65,66]. Recent work on AI’s environmental footprint, sustainability trade-offs, and ecotechnopolitical consequences suggests that governance must address not only what AI enables, but what it consumes and displaces [36,64,65,66]. At the same time, democratic scholarship emphasises that digital transformation affects trust, participation, legitimacy, and the capacity of publics to influence collective futures [38]. In this sense, social impact is not reducible to immediate service outcomes or innovation indicators. It includes longer-term effects on justice, accountability, social cohesion, and democratic agency.
Taken together, these literatures suggest that AI’s and supercomputing’s social impact is multi-dimensional [67,68,69]. It includes immediate harms related to rights, opacity, and exclusion; infrastructural issues related to data governance and public capacity; political-economic effects on labour and territorial development; and ecological and democratic consequences that unfold over longer timescales. This broader understanding of social impact is crucial for the present article because it explains why anticipatory AI governance cannot be separated from mixed-methods multistakeholder analysis. If the effects of AI and supercomputing are distributed across institutions, territories, and publics, then the assessment of those effects must also be distributed across corresponding forms of evidence and participation [70,71,72,73,74,75].
In sum, this section has established the conceptual architecture for the remainder of the article. Anticipatory AI governance provides the institutional lens through which future-oriented public capacity can be understood. Mixed-methods multistakeholder research provides the epistemic strategy required to examine that capacity across actors and scales. The literature on AI’s and supercomputing’s social impact specifies the substantive field in which these questions matter. The following section translates this framework into a research design capable of examining how anticipatory AI governance is operationalised in practice.

3. Mixed Methods: Qualitative Action Research and Quantitative Online Survey

This article adopts a mixed-methods design, combining qualitative action research with a quantitative online survey, in order to examine how anticipatory AI governance can be operationalised in a city-regional setting shaped by supercomputing, digital inclusion challenges, and multilevel public governance (Table 1). The choice of method follows directly from the research question. If the social impact of AI and supercomputing emerges through the interaction between infrastructures, institutions, organised stakeholders, and citizens, then no single source of evidence is sufficient to capture the phenomenon in full. A mixed-methods design is therefore used to connect organisational processes and policy experimentation with wider societal perceptions and everyday experiences [21,33,34].
The qualitative component focuses on action research conducted between 2025 and 2026 in Gipuzkoa and linked to the Digital Inclusion Strategy stemming from an International Summer School [23] (https://www.uik.eus/en/activity/digital-inclusion-generative-artificial-intelligence-gipuzkoa-socially-cohesive-digitally) promoted by the Human Rights and Democratic Culture Directorate of the Provincial Council. This strand examines how three stakeholder groups—civil society organisations/NGOs, provincial directorates, and municipalities—interpret and negotiate the challenges posed by digital inclusion and Generative AI. The quantitative component complements this perspective through an online survey of 911 citizens in Gipuzkoa (conducted with technical assistance from the company Ikerfel), designed to assess how AI is being incorporated into daily life and how its opportunities, risks, and acceptable uses are perceived across the territory. Together, both strands allow the study to connect institutional experimentation with broader public attitudes, thereby offering a multi-scalar analysis of anticipatory AI governance in practice.

3.1. Qualitative: Action Research with Three Stakeholder Groups Through an Analytical Decalogue

Action research was selected because the object of analysis—anticipatory AI governance—is not a stable institutional arrangement but an evolving governance process in which public administrations, civil society organisations, and territorial actors experiment with new policy responses under conditions of uncertainty. Action research is particularly suitable in such contexts because it combines inquiry with institutional learning and participatory problem-solving, allowing researchers and stakeholders to co-produce knowledge through iterative engagement [22].
The action research process was organised around three stakeholder groups that reflect different governance levels and perspectives within the territory:
(1) six civil society organisations and NGOs connected to the Aniztasunaren Sarea network,
(2) seven directorates within the Provincial Council of Gipuzkoa, and
(3) eleven municipal administrations (Elkar Bizi network) across the territory.
To structure the qualitative analysis across these groups, the project co-produced an analytical Decalogue on Digital Inclusion and Generative AI with the six NGOs. The Decalogue was developed through workshops and consultations conducted between March and July 2025 and served as the principal analytical framework for examining how stakeholders perceive and respond to the governance challenges associated with AI, data infrastructures, and digital inclusion. The process culminated in the International Summer School held on 15–16 July 2025, with the active participation of representatives of the six NGOs (Figure 4).
The Decalogue organises the analysis into four thematic blocks:
  • 1. Digital Divide
  • 2. Algorithmic Bias
  • 3. Data Divide
  • 4. Digital Futures
Table 2. Decalogue: Description.
Block 1: Digital Divide
  Dimension 1 (Access & Obstacles): Persistent connectivity barriers affecting migrants, rural residents, and precarious users when accessing digital public services. Policy relevance: highlights infrastructure and accessibility gaps requiring targeted territorial policies.
  Dimension 2 (Digital Literacy & Autonomy): Digital inclusion requires the cognitive capacity to navigate, interpret, and critically assess digital systems and online information. Policy relevance: emphasises training, empowerment, and algorithmic literacy as central components of inclusion strategies.
  Dimension 3 (Devices, Mobile & Connectivity): High dependence on mobile devices among vulnerable users due to affordability constraints and limited access to alternative infrastructures. Policy relevance: demonstrates the importance of device accessibility and affordable connectivity policies.
Block 2: Algorithmic Bias
  Dimension 4 (Experiences of Digital/Algorithmic Discrimination): Cases of automated refusals, discriminatory content filtering, and opaque algorithmic processes affecting marginalised communities. Policy relevance: signals risks in automated public services and the need for algorithmic accountability mechanisms.
  Dimension 5 (Representation in Digital Spaces): Communities reported stereotypes or invisibility within institutional digital platforms and algorithmic systems. Policy relevance: points to the need for inclusive design and culturally responsive digital governance.
  Dimension 6 (Generative AI: Use & Perception): At the summer school, CSOs/NGOs expressed interest in generative AI tools but lacked training and institutional support for safe and ethical usage. Policy relevance: suggests the need for public-sector guidance and digital capacity-building around generative AI.
Block 3: Data Divide
  Dimension 7 (Data & Control): CSOs/NGOs questioned who controls data infrastructures and under what governance frameworks. Policy relevance: connects territorial policy with debates on data sovereignty and public data governance.
  Dimension 8 (Statistical Invisibility): Marginalised groups are frequently absent from official datasets, leading to under-representation in policy design. Policy relevance: demonstrates the need for inclusive data infrastructures and improved statistical representation.
  Dimension 9 (Digital Rights Incidents / Policy Gaps): Experiences of data breaches, harmful chatbots, and automated decisions often lack clear complaint or redress mechanisms. Policy relevance: indicates the need for stronger digital rights protections and oversight frameworks.
Block 4: Digital Futures
  Dimension 10 (Vision: Just Digital Futures): CSOs/NGOs envisioned inclusive digital ecosystems characterised by multilingual tools, community platforms, and ethical AI. Policy relevance: provides normative direction for democratic and inclusive digital governance.
Across these thematic blocks, ten analytical dimensions were identified that capture key aspects of digital inclusion and AI governance in territorial contexts. These dimensions include barriers to access, digital literacy and autonomy, device dependency and connectivity conditions, experiences of algorithmic discrimination, representation in digital environments, patterns of generative AI use and perception, data governance and control, statistical invisibility in datasets, incidents related to digital rights and governance gaps, and collective visions of inclusive digital futures.
Conceptually, the Decalogue aligns with emerging debates in AI governance and digital policy that emphasize the need to democratize AI ecosystems, address systemic bias embedded in algorithmic systems, and strengthen participatory oversight mechanisms within digital governance infrastructures [76,77,78,79,80,81,82]. These discussions increasingly stress the importance of developing governance frameworks capable of addressing algorithmic power asymmetries, ethical risks, and accountability challenges associated with the rapid deployment of artificial intelligence across public and institutional contexts [83,84,85,86,87,88,89,90].
The framework also connects with broader debates on digital sovereignty, digital public infrastructure, and inclusive innovation systems. These perspectives emphasize that data infrastructures and algorithmic systems should be governed in ways that safeguard public interest, reinforce democratic accountability, and promote socially inclusive technological development across territories and institutions [91,92,93,94,95,96,97,98,99,100]. In this context, governments and public institutions face the growing challenge of shaping AI ecosystems and digital infrastructures capable of supporting equitable participation in the digital economy while addressing structural inequalities in access, representation, and technological capability [101,102,103,104,105,106,107,108,109,110].
Within the research design, the Decalogue performs a dual function (Figure 5). First, it operates as a diagnostic instrument, enabling the identification of structural gaps and inequalities in digital inclusion and AI governance. Second, it acts as a roadmap for action research, guiding institutional dialogue and policy experimentation across stakeholder groups [111,112,113,114,115,116,117,118].
Through this dual role, the framework allows the research to translate dispersed stakeholder insights into a structured analytical matrix while maintaining the participatory and iterative character of the action research process.
The following subsections describe how the Decalogue was applied across the three stakeholder groups participating in the action research.

3.1.1. Six Civil Society Organizations/NGOs

The first stakeholder group consisted of six civil society organisations connected to Aniztasunaren Sarea (Diversity Network in Basque), a network representing organisations working with structurally excluded communities across Gipuzkoa. These communities include migrants, refugees, women in vulnerable contexts, people with disabilities, Roma communities, and transnational migrant networks.
The collaboration with Aniztasunaren Sarea constituted a central stage of the action research process. Through workshops and consultations conducted between March and July 2025, participants contributed experiential knowledge regarding how digital governance systems affect everyday interactions with public services and digital infrastructures. Figure 3 illustrates one of the preparatory workshops conducted with civil society organisations representing vulnerable communities. Through this participatory process, the project co-produced the Digital Inclusion and Generative AI Decalogue, which identifies ten key policy dimensions for addressing algorithmic risks, data governance challenges, and digital inequalities in territorial governance contexts.
The co-production of this framework draws on traditions of participatory governance and collaborative public innovation, where policy knowledge emerges through interaction between institutions and civil society actors rather than through top-down policy design. This approach is consistent with scholarship on co-production and participatory governance, which emphasises the importance of incorporating the lived experiences of affected communities into public policy design [111]. Figure 5 presents the conceptual structure of the Decalogue.
The six participating organisations—Jatorkin, AgiFugi, BidezBide, Elkartu, Haurralde, and Emigrados Sin Fronteras—contributed insights across the four thematic blocks of the Decalogue. Their contributions highlighted how digital inclusion challenges intersect with language barriers, legal status, accessibility constraints, gender inequalities, and structural forms of social exclusion. Across organisations, digital inclusion was framed not simply as a technical question of connectivity but as a structural governance challenge linked to institutional accessibility, representation, and rights protection. Participants reported persistent barriers related to language accessibility, device dependency, and the need for digital literacy support. They also identified experiences of algorithmic discrimination, particularly in automated administrative processes and digital platforms.
At the data governance level, civil society organisations emphasised concerns regarding statistical invisibility and the absence of vulnerable communities in official datasets used for public policy design. Participants also raised concerns regarding the governance of sensitive data, including migration status and biometric information. Despite identifying multiple governance challenges, organisations also articulated normative visions for inclusive digital futures, including multilingual public services, universal accessibility standards, feminist digital infrastructures, and community-centred digital ecosystems.
Table 3 summarises the application of the Decalogue framework to the six civil society organisations.

3.1.2. Seven Directorates within the Provincial Council of Gipuzkoa

The second stakeholder group consisted of seven directorates within the Provincial Council of Gipuzkoa, representing different policy domains including taxation, mobility and infrastructure, open governance, environmental governance, gender equality, and administrative modernisation.
Compared to civil society organisations, institutional actors framed digital inclusion and AI governance primarily in terms of administrative capability, service design, and governance frameworks.
Across directorates, digital inclusion was understood as a question of institutional accessibility. Officials emphasised the need to simplify administrative procedures, improve multilingual digital interfaces, and develop assistance tools enabling citizens to navigate complex bureaucratic systems.
Directorates also demonstrated strong awareness of potential risks associated with algorithmic decision-making systems. In policy areas such as taxation and mobility management, institutional actors highlighted the importance of preventing discriminatory outcomes in automated classification systems and emphasised governance instruments such as algorithmic impact assessments, ethical audits, and human oversight mechanisms.
Data governance emerged as another central concern. Several departments emphasised the need for clearer frameworks regulating data ownership, access permissions, and institutional accountability. Concerns were also raised regarding incomplete datasets and territorial blind spots that may undermine evidence-based policymaking.
Finally, directorates articulated visions of digital transformation grounded in institutional responsibility and democratic accountability. Examples included the development of ethical AI governance frameworks, transparency mechanisms for AI-assisted decision-making, and sustainability-oriented digital innovation strategies.
Table 4 summarises the application of the Decalogue framework across the seven directorates.

3.1.3. Eleven Municipalities

The third stakeholder group consisted of eleven municipalities across Gipuzkoa, providing insights into how digital inclusion and AI governance challenges are experienced at the local level.
Municipal administrations represent the territorial implementation layer of anticipatory AI governance. Compared to provincial institutions, municipal actors frequently operate under conditions of limited administrative resources and uneven digital infrastructure.
Evidence collected through the action research indicates that municipalities consistently encounter digital inclusion challenges related to generational digital divides, limited digital literacy, and reliance on shared community infrastructures such as libraries and civic centres. These infrastructures often function as essential access points for citizens navigating digital public services.
Municipal actors reported more limited direct experience with algorithmic decision systems but expressed concerns regarding discriminatory narratives in digital environments, online hate speech, and the potential for automated administrative procedures to reinforce existing social inequalities.
Data governance challenges were also highlighted at the municipal level. Several municipalities reported limited institutional capacity for managing complex data governance frameworks and emphasised the need for coordination with provincial and regional administrations.
Despite these constraints, municipal actors articulated visions of inclusive digital futures centred on citizen empowerment, digital literacy initiatives, and improved digital rights awareness (Figure 6).
Table 5 summarises the application of the Decalogue framework across the eleven participating municipalities.
Taken together, the three stakeholder perspectives demonstrate that anticipatory AI governance in Gipuzkoa emerges through multi-level institutional interaction. Civil society organisations contribute experiential knowledge regarding digital exclusion and algorithmic harms, provincial directorates provide governance capacity and policy frameworks, and municipalities highlight territorial implementation challenges and socio-spatial inequalities.
The Decalogue framework enables these perspectives to be analysed through a common analytical structure while preserving their institutional diversity. Within the action research design, the framework therefore functions as a territorial analytical grammar of digital inclusion and AI governance, supporting the development of a Science-for-Policy approach to anticipatory AI governance in the age of supercomputing.

3.2. Quantitative: Online Survey with Citizens (N=911)

The quantitative component of the research consisted of an online survey conducted among residents of Gipuzkoa in order to assess how AI is being incorporated into everyday life and how its societal implications are perceived across the territory [129]. The survey complements the qualitative action research by providing population-level evidence on citizens’ experiences, attitudes, and expectations regarding AI governance.
The questionnaire was designed collaboratively by the research team and implemented with technical support from the survey research company Ikerfel. It was administered online between 27 January and 5 February 2026 and collected responses from 911 residents of Gipuzkoa. Participation was voluntary and anonymous, and respondents could complete the questionnaire in either Basque or Spanish, ensuring linguistic accessibility across the territory (Appendix). The average completion time was approximately fifteen minutes.
The survey instrument was structured around a set of thematic dimensions corresponding to the broader analytical framework of the research, the Decalogue (Table 2 and Figure 5): digital inclusion, social impact, governance expectations, and institutional trust regarding AI systems. The questionnaire contained both closed-ended and Likert-scale questions, allowing the measurement of frequencies, attitudes, and perceptions across multiple domains of social life.
The survey instrument was designed so that its items could be analytically correlated with the ten dimensions of the Decalogue introduced above. In methodological terms, the questionnaire did not reproduce the Decalogue verbatim, but operationalised its core concerns through items on: personal and professional use of AI (Q4–Q9), knowledge of AI (Q10), privacy and data protection (Q11–Q12), wellbeing and dependency (Q13–Q16), everyday effects (Q21–Q23), gender gaps (Q24–Q28), youth autonomy and life projects (Q29–Q32), data sharing and ethical responsibility (Q33–Q35), community uses of AI (Q36–Q37), sustainability and data centres (Q38–Q39), and public-administration automation and chatbots (Q40–Q43).
This correlation between questionnaire items and Decalogue dimensions allows the survey to be interpreted not merely as a descriptive opinion poll, but as a quantitative operationalisation of the article’s broader analytical framework. Thus, the citizen survey provides the population-level counterpart to the action research undertaken with civil society organisations, provincial directorates, and municipalities. In doing so, it helps assess whether the social conditions required for anticipatory AI governance—digital inclusion, trust, accountability, data stewardship, and civic legitimacy—are present across the territory. The broad headline figures publicly reported in the Berria newspaper article are consistent with this interpretation [129]: AI is already widely used in Gipuzkoa, but its social acceptance is conditional on transparency, privacy, sustainability, and meaningful human oversight.
From the perspective of the Decalogue, the first block, Digital Divide, reveals a mixed picture. On the one hand, uptake is high and AI is already used for learning or study by 47.4% of respondents, for work by 46.8%, for shopping and services by 39.8%, and for health and wellbeing tracking by 32.3%. On the other hand, only 46.3% report that they know well or understand in depth what AI is, which suggests that diffusion is broader than critical literacy. This finding is reinforced by the marked age gradient noted above. Quantitatively, therefore, digital inclusion in Gipuzkoa cannot be reduced to access alone; it also concerns the uneven distribution of the cognitive and practical capacities required to use AI meaningfully, safely, and autonomously. This is fully consistent with the questionnaire’s focus on AI use, familiarity, and everyday effects.
The second block, Algorithmic Bias, shows that citizens perceive clear risks even while recognising usefulness. In the survey database, 71.8% agree or strongly agree that AI can create privacy risks because personal data may be used without users clearly knowing how or by whom. Likewise, 43.0% perceive a high or very high dependency risk in AI use among people around them, close to the 44% headline figure reported publicly [129]. In the specific case of public administration, 46.8% identify lack of transparency and discrimination in automated decisions as a concrete risk, while 35.3% identify exclusion through the digitalisation of bureaucracy. These results indicate that the social impact of AI is not interpreted only through efficiency gains, but also through concerns about opacity, asymmetry, and unequal exposure to harm.
The third block, Data Divide, is equally significant. Only 4.0% of respondents say they would share personal data in any case for research or social benefit, whereas 47.5% would do so only under conditional arrangements—either if privacy were guaranteed or if the data were anonymised—and 32.5% would not share their data. In parallel, 53.7% believe governments should guarantee the ethical development of AI, and 38.2% assign that responsibility to universities and research centres. These findings suggest that citizens do not reject data use per se, but they strongly condition it on institutional safeguards, privacy guarantees, and public-interest governance. In this sense, the survey confirms that data governance is a central component of anticipatory AI governance and not a secondary technical matter.
The fourth block, Digital Futures, reveals conditional openness rather than technophobic resistance. In the survey database, 63.7% consider it acceptable to automate document and administrative procedures, and 57.9% support automation in fiscal and tax management. Yet 59.9% say they still prefer face-to-face attention in public services, while only a very small minority consider chatbots superior (the publicly reported headline figure puts in-person preference at 61% [129]). This indicates that the preferred model is not one of full substitution, but of selective, bounded, and accountable automation combined with human presence. At the same time, 55.9% support specific measures to reconcile AI development with environmental sustainability, and 51.6% support stricter sustainability criteria given the environmental impact of data centres. These results are especially important for the present article because they show that the social legitimacy of AI in Gipuzkoa is tied not only to efficiency and usability, but also to ecological and democratic conditions [64].
Taken together, the survey provides a quantitative answer to the article’s research question. If anticipatory AI governance in the age of supercomputing is to be operationalised at the city-regional scale, it cannot be based solely on advanced compute capacity or technological leadership. The citizen data show that the social impact of AI—and by extension of the supercomputing infrastructures that enable it—is mediated by four conditions: first, whether citizens possess the skills and confidence to use AI autonomously; second, whether systems are transparent and contestable; third, whether data governance is publicly trusted and privacy-preserving; and fourth, whether public-sector automation remains human-centred, territorially inclusive, and environmentally accountable. Supercomputing’s social impact should therefore be interpreted not as an external effect to be measured after deployment, but as a prior governance challenge whose legitimacy depends on institutional safeguards and civic trust. This is precisely where the survey complements the qualitative action research: together, they show that the governance of advanced computational infrastructures must be socially grounded if it is to remain democratically legitimate (Table 6 and Table 7).
The findings provide several insights into how anticipatory AI governance may function in territorially grounded institutional environments.
First, the results reveal a structural distinction between technological adoption and digital autonomy. AI is already widely embedded in everyday life in Gipuzkoa, with approximately three-quarters of respondents reporting that they use or have used AI tools. However, only around half of respondents report a good or deep understanding of AI. This gap indicates that the diffusion of AI technologies does not necessarily translate into critical digital literacy or autonomous use. From a governance perspective, digital inclusion must therefore be understood not merely as access to technological tools but as the capacity to interpret, evaluate, and interact with algorithmic systems in meaningful ways.
Second, the study identifies a significant generational digital divide. AI use among respondents aged 16–34 reaches nearly ninety percent, while adoption among individuals aged 55 and above remains substantially lower. This finding suggests that the principal divide is not primarily ideological or attitudinal but practical and capability-based. In increasingly automated administrative environments, such disparities may translate into unequal access to public services and digital opportunities. Anticipatory AI governance must therefore incorporate targeted policies addressing age-based inequalities in digital skills and technological familiarity.
Third, the results highlight ambivalent public perceptions of administrative digitalisation. While citizens broadly support the automation of repetitive tasks in public administration, a majority still prefer face-to-face interactions when dealing with public services. At the same time, a substantial proportion of respondents believe that the digitalisation of bureaucracy may exclude certain groups. This combination of conditional acceptance and cautious scepticism indicates that citizens favour a hybrid governance model in which AI augments administrative capacity without replacing human interaction in sensitive or consequential procedures.
Fourth, the findings demonstrate that citizens simultaneously adopt AI technologies and express concern about their risks. Large majorities perceive potential threats to privacy and data protection associated with AI systems. Similarly, many respondents identify the lack of transparency and possible discrimination in automated decision-making as significant governance challenges. These results suggest that the legitimacy of AI adoption depends strongly on institutional safeguards, including transparency mechanisms, algorithmic accountability frameworks, and accessible channels for contesting automated decisions.
Fifth, data governance emerges as a central condition for public trust. Only a small minority of respondents would share personal data without restrictions, whereas most would do so only under conditions of strong privacy guarantees or anonymisation. At the same time, more than half believe that governments should play a leading role in ensuring the ethical development of AI systems. This finding indicates that citizens do not reject data use in principle but demand clear institutional frameworks governing how data are collected, processed, and reused.
Sixth, the survey highlights emerging concerns about the broader societal implications of AI infrastructures. Respondents express significant awareness of environmental sustainability issues related to data centres and computational infrastructures. Many support stricter sustainability criteria for AI development and measures designed to reconcile technological innovation with environmental protection. This finding is particularly relevant in the context of supercomputing ecosystems, where the environmental footprint of advanced computational infrastructures increasingly forms part of public debate.
Seventh, territorial differences observed across the nine counties of Gipuzkoa suggest that digital transformation is unevenly distributed across the region. Urban and metropolitan areas demonstrate slightly higher levels of AI use, digital literacy, and experimentation with automated services, whereas rural or peripheral areas display somewhat lower indicators across these dimensions. These patterns reinforce the importance of place-based policy approaches when designing digital inclusion strategies and AI governance frameworks.
Taken together, these findings support the central argument of the article: city-regional administrations can function as laboratories for anticipatory AI governance when policy experimentation is combined with empirical evidence, civic participation, and advanced computational infrastructures. In the case of Gipuzkoa, the coexistence of a devolved governance framework, a dense civic ecosystem, and an emerging supercomputing landscape provides a distinctive institutional environment in which to test such governance approaches.
The study also demonstrates the analytical value of mixed-methods multistakeholder research. Action research with civil society organisations, provincial directorates, and municipalities made it possible to identify institutional capacity gaps and governance challenges that might otherwise remain invisible. The citizen survey, in turn, revealed how AI adoption and its perceived risks are distributed across the wider population. When analysed together, these two forms of evidence provide a multi-scalar perspective linking institutional experimentation, territorial governance dynamics, and societal perceptions of AI.
From a policy perspective, the results suggest that the social legitimacy of AI and supercomputing infrastructures depends on four interrelated conditions.
First, digital inclusion must be strengthened through investments in digital literacy, algorithmic awareness, and community-based learning infrastructures. Second, algorithmic governance frameworks must ensure transparency, human oversight, and mechanisms for contesting automated decisions. Third, data governance must prioritise privacy protection, institutional accountability, and public-interest data stewardship. Fourth, the deployment of AI within public administration should maintain human-centred service design while ensuring environmental sustainability and territorial inclusiveness.
The Gipuzkoa case therefore illustrates how anticipatory AI governance can be operationalised through a territorially grounded governance architecture that connects technological infrastructures with democratic accountability and social inclusion. Rather than treating the societal implications of supercomputing as external effects to be addressed after deployment, the study shows that these implications must be integrated into governance frameworks from the outset.
Several limitations should be acknowledged. First, the study focuses on a single city-regional territory, which limits the generalisability of the findings to other governance contexts. Second, the citizen survey captures perceptions at a specific moment in time, whereas attitudes toward AI may evolve rapidly as technologies and public policies change. Third, although the mixed-methods design captures multiple stakeholder perspectives, future research could expand the analysis to include private-sector actors and technological developers within the regional AI ecosystem.
Future research could therefore explore comparative analyses across regions hosting advanced computational infrastructures, examine longitudinal changes in citizen perceptions of AI governance, and investigate how anticipatory governance frameworks interact with emerging regulatory regimes such as the European AI Act.
Despite these limitations, the study contributes to ongoing debates on the governance of AI and supercomputing by demonstrating that anticipatory AI governance is not merely a normative aspiration but a practical institutional challenge. Addressing this challenge requires integrating technological innovation with civic participation, data governance frameworks, and territorially grounded public policy experimentation. In this sense, anticipatory AI governance emerges as a crucial public capability for navigating the democratic, social, and environmental implications of artificial intelligence in the age of supercomputing.

4. Discussion: Digital Inclusion Index and AI Governance Perception Index

This article examined how city-regional governments can operationalise anticipatory AI governance in the age of supercomputing through a mixed-methods multistakeholder framework involving civil society organisations, institutional departments, municipalities, and citizens. The evidence from Gipuzkoa suggests that anticipatory AI governance is best understood not as a purely regulatory or technological exercise, but as a territorial public capability that links advanced computational infrastructures, institutional learning, participatory governance, and digital inclusion. This interpretation is consistent with the growing literature on AI in public governance, which argues that the introduction of AI into public administration generates both opportunities for service innovation and risks related to opacity, accountability, and democratic legitimacy [1,2,3,4,31,67,68].
The findings support the working hypothesis of the article. In the Gipuzkoa case, anticipatory AI governance is operationalised through the interaction of four governance arenas: civil society organisations, provincial institutions, municipalities, and citizens. This multistakeholder configuration matters because each actor identifies different but complementary dimensions of AI governance. Civil society organisations foregrounded language barriers, precarious connectivity, algorithmic exclusion, and statistical invisibility affecting migrants, women, disabled users, and other structurally marginalised groups. Municipalities revealed the territorial unevenness of digital infrastructures, the continued centrality of shared public spaces such as libraries and civic centres, and the limited local capacity to respond to digital rights incidents. Provincial directorates framed AI governance more in terms of institutional capability, including impact assessments, ethical oversight, data governance, and trustworthy chatbot design. These findings align with work emphasising co-production and action research as ways to generate policy-relevant knowledge under conditions of uncertainty [21,33,34,52,110].
The citizen survey complements these qualitative findings by showing that AI is already socially embedded across Gipuzkoa, but that its legitimacy remains conditional. According to the survey, 72.7% of respondents reported using or having used AI tools, and 62.2% of AI users stated that they use them at least weekly. Yet only 46.3% reported good or deep knowledge of AI, indicating a substantial gap between adoption and understanding. This pattern is consistent with research showing that public-sector AI adoption often advances more rapidly than institutional and civic capacity to interpret, scrutinise, and govern it [1,2,9,31]. It also supports the argument that digital inclusion in AI-mediated environments cannot be reduced to access alone, but must include cognitive autonomy, critical literacy, and the ability to challenge or contest algorithmically mediated outcomes [23,24,31].
A particularly important result concerns the age gradient. AI use reaches 89.1% among respondents aged 16–34, but only 52.4% among those aged 55 and over. This generational gap suggests that the principal divide is not simply attitudinal, but practical and capability-based. In the context of increasingly digitalised public services, such disparities may produce indirect forms of exclusion, especially where administrative systems presume familiarity with chatbots, automated forms, or AI-assisted interfaces. This finding is consistent with broader work on digital public infrastructure and digital government, which stresses that inclusive service design must address unequal capability across populations rather than assuming uniform uptake [53,54,70,96].
The survey also indicates that the public acceptance of AI is shaped by a strong demand for safeguards. On the one hand, citizens appear open to bounded automation: 63.7% support automating documents and administrative procedures, and 42.3% have already used an automated chatbot to communicate with an administration. On the other hand, 59.9% still prefer face-to-face attention in public services, 71.8% agree that AI creates privacy risks, 35.3% identify exclusion through digitalised bureaucracy as a public-sector risk, and 46.8% identify lack of transparency or discrimination in automated decisions. These results echo previous studies showing that the legitimacy of AI in public administration depends on transparency, contestability, and meaningful human oversight [6,8,11,28,31,42,46]. They also resonate with the lessons of Robodebt and SyRI, where automation without adequate accountability or rights protection led to severe democratic and legal harms [5,7,11].
The mixed-methods findings therefore suggest that anticipatory AI governance requires more than ethics principles or general regulatory commitments. In line with anticipatory governance scholarship, it requires institutions capable of acting under uncertainty through foresight, experimentation, reflexivity, and iterative learning [14,15,47]. In this sense, the Gipuzkoa case supports an institutional interpretation of anticipatory governance similar to that proposed in mission-oriented and market-shaping approaches, where public administrations are expected not merely to regulate after harms arise, but to shape socio-technical trajectories in advance [16,17]. This is especially relevant in contexts where AI development is linked to supercomputing and quantum infrastructures, because the scale and strategic significance of those infrastructures may otherwise encourage a narrow competitiveness logic detached from democratic accountability [25,26,27,28,40,63,77].
A distinctive contribution of this article is to connect these governance dynamics with an emerging supercomputing ecosystem. The Basque Country is positioning itself within Europe’s advanced computing landscape through initiatives such as the IBM Quantum System Two in Donostia–San Sebastián [30]. However, the present findings indicate that advanced computational capacity does not automatically generate public value. Rather, its legitimacy depends on whether institutional actors can connect infrastructure development with digital inclusion, public-interest data governance, and trustworthy administrative experimentation. This argument is consistent with recent work on digital public infrastructure and AI sovereignty, which stresses that compute, data, and governance capacity must be aligned if AI ecosystems are to remain democratically governable [40,41,53,54,63,70,92,93].
The territorial dimension of the findings is especially important. Building on the county-level analysis, two exploratory composite indicators were derived from the survey: a Digital Inclusion Index and an AI Governance Perception Index. The Digital Inclusion Index was constructed by averaging six county-level indicators associated with access, literacy, and everyday use: Q4 (AI use), Q10 (good or deep AI knowledge), Q7 (AI for learning/study), Q9 (AI for work), Q5 (weekly AI use), and Q40 (administrative chatbot use). The AI Governance Perception Index was constructed by averaging seven county-level indicators associated with rights, trust, and accountability: Q11 (privacy risks), Q35 (bureaucratic exclusion), Q35 (lack of transparency/discrimination), Q33 (conditional data sharing), Q33 (refusal to share personal data), Q34 (government responsibility for ethical AI), and Q41 (preference for face-to-face public service); where a question number appears twice, the two indicators are drawn from distinct response options of the same item. These indicators are analytically derived from the questionnaire items and should therefore be interpreted as exploratory composite measures rather than validated scales (Table 8).
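The construction of the two composite indices, an unweighted mean of county-level percentage indicators, can be sketched as follows. This is a minimal illustration of the averaging described above; the function name and the indicator values are invented for the example and are not the study's actual county data.

```python
# Minimal sketch of the exploratory composite indices: each index is the
# unweighted mean of several county-level percentage indicators (0-100 scale).
# The indicator values below are invented for illustration only.

def composite_index(indicators: dict[str, float]) -> float:
    """Average county-level indicators (percentages) into a single 0-100 score."""
    return sum(indicators.values()) / len(indicators)

# Hypothetical county record for the six Digital Inclusion indicators.
county = {
    "Q4_ai_use": 70.0,
    "Q10_knowledge": 45.0,
    "Q7_learning": 48.0,
    "Q9_work": 47.0,
    "Q5_weekly_use": 60.0,
    "Q40_chatbot_use": 42.0,
}

digital_inclusion = composite_index(county)  # 52.0 for these invented values
```

An unweighted mean of percentages keeps each index on an interpretable 0-100 scale, consistent with the county scores reported in the text; weighting or scale validation would be a natural refinement for the future research flagged below.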
The county results show a modest but meaningful territorial differentiation. For the Digital Inclusion Index, Donostialdea records the highest value (57.6), followed by Oarsoaldea (54.6), Debagoiena (54.2), Goierri (53.9), Bidasoa-Oiartzun (53.5), Tolosaldea (53.4), Urola Garaia (53.0), Urola Kosta (52.8), and Debabarrena (52.3). For the AI Governance Perception Index, Donostialdea again scores highest (50.1), followed by Oarsoaldea and Debagoiena (49.6), Goierri (49.4), Tolosaldea (49.3), Urola Garaia (49.2), Bidasoa-Oiartzun and Urola Kosta (49.1), and Debabarrena (49.0). Although the differences are not large, they suggest that AI uptake, literacy, and governance concerns are not distributed uniformly across the territory. This reinforces the argument that AI governance is territorially mediated and that city-regional administrations must address not only institutional capacity but also spatially uneven social conditions [12,13,23,29] (Table 9).
From this perspective, the Gipuzkoa case offers a useful complement to other supercomputing and advanced-compute ecosystems worldwide. Barcelona combines the Barcelona Supercomputing Center with a broader digital rights and urban governance agenda, illustrating how advanced computing infrastructures can be connected with municipal democratic innovation and citizen-oriented digital policies [13,98]. Finland’s LUMI supercomputer represents another governance model in which high-performance computing is integrated with regional development strategies and environmental sustainability considerations, highlighting the growing relevance of AI’s dual environmental and social footprint [36,64,65,66]. In contrast, the Frontier supercomputer at Oak Ridge National Laboratory in the United States exemplifies a more centralised and state-driven governance model in which advanced computational infrastructures are embedded primarily within national research, defence, and technological competitiveness strategies. Compared with these cases, Gipuzkoa illustrates a more explicitly city-regional and multistakeholder pathway, where supercomputing capacity is embedded within a devolved governance system characterised by civic participation, territorial experimentation, and digital inclusion concerns [20,23,29,111].
The results of the online citizen survey reinforce this distinctive governance configuration. While supercomputing infrastructures are often discussed primarily in terms of technological capacity or national competitiveness, the Gipuzkoa case highlights the importance of societal adoption and governance perceptions at the territorial level. The survey shows that 72.7% of respondents have used AI tools, with 62.2% of AI users reporting at least weekly use, suggesting that AI technologies are already deeply embedded in everyday life across the territory. This level of adoption is broadly comparable to trends observed in other advanced digital ecosystems where AI tools have rapidly diffused through cloud services, generative AI applications, and platform-based infrastructures [1,4]. However, the Gipuzkoa survey also reveals an important capability gap: only 46.3% of respondents report good or deep knowledge of AI, indicating that technological adoption is advancing more rapidly than algorithmic literacy. Such disparities highlight the importance of governance approaches that combine technological innovation with digital capability-building, a concern also raised in recent debates on responsible AI governance in public administration [31,68].
Public perceptions of governance risks further differentiate the Gipuzkoa case. A large majority of respondents (71.8%) agree that AI may create privacy risks, while 46.8% identify potential lack of transparency or discrimination in automated decisions, reflecting concerns similar to those observed in international debates on algorithmic governance and automated public decision-making [6,8,31]. At the same time, the results reveal a pragmatic attitude toward administrative automation: 63.7% of respondents support the automation of administrative procedures, yet 59.9% still prefer face-to-face interaction with public services. This preference for hybrid governance models contrasts with more technocratic approaches observed in some centralised AI governance environments and instead aligns with emerging models of human-centred digital public infrastructure, where automation is designed to augment rather than replace human oversight [53,54,70].
The territorial analysis further reinforces these findings. The Digital Inclusion Index and the AI Governance Perception Index show modest variation across Gipuzkoa’s counties, suggesting that while digital capabilities remain somewhat concentrated in the metropolitan innovation ecosystem of Donostialdea, governance perceptions and concerns about privacy, transparency, and accountability are broadly shared across the territory. This pattern contrasts with many large national AI ecosystems, where technological innovation tends to be geographically concentrated while public awareness remains uneven. In the Gipuzkoa case, the relatively homogeneous distribution of governance concerns indicates that public debate about AI and algorithmic governance is diffusing widely across the territory, reinforcing the argument that city-regional institutions can function as effective arenas for anticipatory AI governance.
Taken together, the comparison suggests that the Gipuzkoa model represents a territorial governance approach to supercomputing ecosystems, in which advanced computational infrastructures are embedded within participatory governance processes, local institutional experimentation, and citizen-centred digital inclusion strategies. While national-scale supercomputing initiatives often emphasise technological competitiveness and geopolitical positioning, the Gipuzkoa experience highlights the importance of aligning compute capacity with democratic accountability, civic trust, and territorial policy innovation.
The discussion also highlights several broader implications for the literature. First, the findings reinforce calls to move beyond abstract AI ethics toward practical governance capacities in public administration [6,9,18,42,68]. Second, they support recent arguments that public-sector AI should be analysed through the lens of digital public infrastructure, data sovereignty, and institutional capability rather than as a set of isolated applications [53,54,55,70,75,94]. Third, the study contributes to territorial innovation debates by showing that the social impact of AI and supercomputing is mediated through regional institutions, local implementation settings, and civic infrastructures [12,13,29,60,107]. Finally, the results speak to wider democratic concerns, since the legitimacy of AI in government depends on whether citizens can trust the institutions deploying it and whether they retain meaningful routes for understanding and contesting automated systems [38,49,50,83,104].
These findings suggest four practical implications. First, digital inclusion policies must go beyond connectivity and focus on algorithmic literacy, multilingual support, accessibility, and assisted navigation of digital public services. Second, public administrations need hybrid service models in which AI augments administrative capacity without displacing human interaction in high-stakes or sensitive settings. Third, public-interest data governance must become a central part of AI strategy, including clear rules for data access, reuse, privacy, and accountability. Fourth, advanced computational infrastructures such as supercomputing and quantum systems should be governed not only as innovation assets but as socio-technical infrastructures whose legitimacy depends on democratic oversight, environmental responsibility, and territorial inclusion [16,17,18,25,26,27,28,36,64].
Several limitations remain. The study focuses on a single territorial case, and the composite indices introduced here are exploratory. The survey captures perceptions at a particular moment in time, and those perceptions may evolve rapidly as Generative AI becomes more embedded in daily life and public administration. Future research should therefore compare multiple supercomputing territories, refine and validate territorial AI governance indices, and examine longitudinal changes in civic trust, institutional capacity, and the environmental and distributive consequences of AI infrastructures [36,37,61,62,67,69]. Comparative work with other city-regional and national supercomputing ecosystems would be particularly valuable for identifying which governance arrangements most effectively align compute capacity, digital inclusion, and democratic accountability.
Overall, the Gipuzkoa case suggests that city-regional governments can operationalise anticipatory AI governance when they combine multistakeholder co-production, mixed-methods evidence, territorial experimentation, and public-interest data governance within a broader strategy of democratic digital transformation. In that sense, supercomputing’s social impact should not be treated as an externality to be assessed after deployment. It should instead be governed from the outset as a territorial question of inclusion, legitimacy, and institutional capability.

5. Conclusions

This article set out to examine how city-regional governments can operationalise anticipatory AI governance in the age of supercomputing through a mixed-methods multistakeholder framework involving civil society organisations, institutional departments, municipalities, and citizens. Taking the Basque city-regional territory of Gipuzkoa as an empirical case, the study combined qualitative action research with three stakeholder groups and a citizen survey (N = 911) in order to analyse how advanced computational infrastructures intersect with digital inclusion, democratic accountability, and public perceptions of artificial intelligence.
The most distinctive feature of the Gipuzkoa case is not simply the diffusion of AI technologies but the simultaneous emergence of widespread public awareness of algorithmic governance risks, suggesting that advanced digital societies may develop forms of “civic AI literacy” even before formal institutional governance frameworks fully mature.
The results provide a clear answer to the research question. Anticipatory AI governance at the city-regional scale can be operationalised when institutional experimentation, civic participation, and empirical evidence are combined within a territorially grounded governance architecture. In the Gipuzkoa case, this architecture emerges through the interaction of four governance arenas: civil society organisations identifying digital exclusion and algorithmic harms, provincial institutions developing governance frameworks and administrative capacities, municipalities mediating territorial implementation, and citizens expressing conditional acceptance of AI technologies. This multistakeholder configuration demonstrates that AI governance is not merely a regulatory exercise but a broader institutional capability linking infrastructure development, public policy experimentation, and civic legitimacy [14,15,16,17,31].
The findings also reveal a structural distinction between technological diffusion and digital autonomy. Although AI adoption is already widespread—more than seventy percent of respondents report using AI tools—less than half report a good or deep understanding of how these systems function. This gap indicates that technological uptake is advancing more rapidly than algorithmic literacy and reinforces the argument that digital inclusion must encompass not only access but also cognitive capacity, critical awareness, and the ability to contest algorithmic decisions [23,31]. The generational gradient observed in the survey further confirms this point: younger respondents report significantly higher levels of AI use than older cohorts, suggesting that unequal digital capabilities may translate into unequal access to public services in increasingly automated administrative environments.
At the same time, the study shows that public acceptance of AI is strongly conditioned by governance safeguards. Citizens generally support bounded forms of automation in public administration, particularly for routine administrative tasks. However, a majority still prefer face-to-face interaction with public services and express strong concerns about privacy, transparency, and algorithmic discrimination. These findings confirm previous research indicating that the legitimacy of AI in public administration depends on the presence of human oversight, institutional accountability, and mechanisms allowing citizens to understand and contest automated decisions [6,8,11].
The territorial analysis reinforces this interpretation. The Digital Inclusion Index and the AI Governance Perception Index developed for the study reveal modest but meaningful spatial variation across Gipuzkoa’s counties. Metropolitan areas such as Donostialdea demonstrate slightly higher levels of digital inclusion, while governance concerns related to privacy, transparency, and accountability appear broadly distributed across the territory. These results highlight the importance of place-based policy approaches, confirming that AI governance is mediated by territorial institutional capacities and local social conditions rather than by technological infrastructures alone [12,13,29].
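The article does not specify how the two composite indices are computed; as a purely hypothetical illustration, an index of this kind is often built as an equal-weighted mean of min-max-normalised survey items. The item names, scales, and equal weighting below are illustrative assumptions, not the study's actual construction:

```python
# Hypothetical sketch of a composite index (e.g., a Digital Inclusion Index)
# built from survey items. Item choice, scales, and equal weighting are
# illustrative assumptions, not the study's actual methodology.

def minmax(value, lo, hi):
    """Rescale a raw item score to the [0, 1] interval."""
    return (value - lo) / (hi - lo)

def composite_index(items):
    """Equal-weighted mean of normalised items.

    items: list of (raw_score, scale_min, scale_max) tuples,
    e.g. Likert responses on access, skills, and service use.
    """
    normalised = [minmax(v, lo, hi) for v, lo, hi in items]
    return sum(normalised) / len(normalised)

# Example respondent: three 1-5 Likert items (access, skills, usage).
respondent = [(4, 1, 5), (3, 1, 5), (5, 1, 5)]
print(round(composite_index(respondent), 3))  # → 0.75
```

County-level scores would then be aggregated by averaging respondent-level indices within each territorial unit, which is one common way such spatial variation is reported.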
The comparison with other advanced-computing ecosystems further clarifies the distinctive character of the Gipuzkoa model. Whereas initiatives such as Barcelona’s supercomputing ecosystem connect advanced computing with urban digital rights agendas, and infrastructures such as Finland’s LUMI emphasise sustainability and regional innovation, large-scale facilities like Oak Ridge’s Frontier supercomputer operate primarily within centralised national strategies. By contrast, the Gipuzkoa case illustrates a city-regional pathway in which advanced computing infrastructures are embedded within devolved governance systems characterised by civic participation, policy experimentation, and territorial digital inclusion strategies. This suggests that subnational institutions may function as laboratories for anticipatory AI governance when technological infrastructures are aligned with democratic governance and public-interest innovation [20,23,29].
From a policy perspective, four implications emerge. First, digital inclusion strategies must move beyond connectivity and incorporate algorithmic literacy, multilingual accessibility, and assisted navigation of digital public services. Second, hybrid service models are required in which AI augments administrative capacity while maintaining human oversight in sensitive or consequential decisions. Third, data governance frameworks must prioritise privacy, accountability, and public-interest stewardship in order to sustain citizen trust. Fourth, supercomputing and advanced computational infrastructures should be governed as socio-technical systems whose legitimacy depends on democratic oversight, territorial inclusion, and environmental sustainability rather than on technological competitiveness alone [36,64,65,66].
Despite these contributions, several limitations must be acknowledged. The study focuses on a single territorial case, which limits the generalisability of its findings. The Digital Inclusion Index and AI Governance Perception Index introduced here should therefore be interpreted as exploratory analytical tools rather than validated measurement scales. Moreover, the citizen survey captures perceptions at a specific moment in time, whereas attitudes toward AI may evolve rapidly as generative AI technologies become more embedded in public services and everyday life.
Future research should therefore pursue three directions. First, comparative studies across regions hosting advanced computational infrastructures would allow researchers to examine how different governance models shape the social impact of AI and supercomputing. Second, longitudinal studies could analyse how citizen perceptions of AI governance evolve over time as regulatory frameworks, technological capabilities, and institutional practices develop. Third, further research is needed to refine territorial AI governance indicators capable of measuring the relationship between digital inclusion, institutional capacity, and democratic legitimacy in AI-mediated governance systems.
In conclusion, the Gipuzkoa case demonstrates that anticipatory AI governance is not simply a normative aspiration but a practical institutional challenge. When advanced computational infrastructures are embedded within multistakeholder governance frameworks, supported by empirical evidence and oriented toward public-interest data governance, city-regional administrations can function as laboratories for democratic experimentation in the governance of artificial intelligence. In this sense, the societal implications of supercomputing should not be treated as external effects to be evaluated after technological deployment. Instead, they must be addressed from the outset as a central question of territorial governance, democratic accountability, and digital inclusion in the age of AI.

Author Contributions

Conceptualization, I.C.; Methodology, I.C.; Software, I.C.; Validation, I.C.; Formal analysis, I.C.; Investigation, I.C. and I.E.; Resources, I.C. and I.E.; Data curation, I.C.; Writing – original draft, I.C.; Writing – review and editing, I.C. and I.E.; Visualization, I.C.; Supervision, I.C.; Project administration, I.C. and I.E.; Funding acquisition, I.C. and I.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by (i) Gipuzkoa Province Council, Human Rights & Democratic Culture Directorate: Action Research Programme (PT10937) including xiii.1. Digital Inclusion & Generative AI International Summer School Scientific Direction, 15–16 July 2025, Donostia-St. Sebastian, Spain; xiii.2. Anticipatory AI Governance, and xiii.3. EcoTechnoPolitics (ETP); (ii) European Commission, Horizon 2020, H2020-MSCA-COFUND-2020-101034228-WOLFRAM2: Ikerbasque Start Up Fund, 3021.23.EMAJ; (iii) UPV-EHU, Research Groups, IT 1541-22; (iv) Ayuda en Acción NGO, Innovation & Impact Unit, Research Contract: Scientific Direction and Strategic Advisory, Social Innovation Platforms in the Age of Artificial Intelligence (AI) (www.designingopportunities.org, accessed on 1 November 2025) and AI for Social Innovation. Beyond the Noise of Algorithms and Datafication Summer School Scientific Direction, 2–3 September 2024, Donostia-St. Sebastian, Spain (https://www.uik.eus/en/activity/artificial-intelligence-social-innovation-ai4si, accessed on 1 July 2024), PT10863; (v) Presidency of the Basque Government, External Affairs General Secretary, Basque Communities Abroad Direction, Scientific Direction and Strategic Advisory e-Diaspora Platform Han-Hemen (https://cordis.europa.eu/project/id/101120657, accessed on 1 November 2025), PT10859; (vi) European Commission, Horizon Europe, ENFIELD European Lighthouse to Manifest Trustworthy and Green AI, HORIZON-CL4-2022-HUMAN-02-02-101120657; SGA oc1-2024-TES-01-01, https://cordis.europa.eu/project/id/101120657 (accessed on 1 November 2025). Invited Professor at BME, Budapest University of Technology and Economics (Hungary) (https://www.tmit.bme.hu/speechlab?language=en; accessed on 1 November 2025); (vii) Gipuzkoa Province Council, Etorkizuna Eraikiz 2024: AI’s Social Impact in the Historical Province of Gipuzkoa (AI4SI). 2024-LAB2-007-01.
www.etorkizunaeraikiz.eus/en/ (accessed on 1 November 2025) and https://www.uik.eus/eu/jarduera/adimen-artifiziala-gizarte-berrikuntzarako-ai4si (accessed on 1 November 2025); (viii) Warsaw School of Economics SGH (Poland) by RID LEAD, Regional Excellence Initiative Programme (https://rid.sgh.waw.pl/en/grants-0 (accessed on 1 November 2025)) and https://www.sgh.waw.pl/knop/en/conferences-and-seminars-organized-by-the-institute-of-enterprise (accessed on 1 November 2025); (ix) SOAM Residence Programme: Network Sovereignties (Germany) via BlockchainGov (www.soam.earth); (x) Decentralization Research Centre (Canada) (www.thedrcenter.org/fellows-and-team/igor-calzada/ (accessed on 1 November 2025)); (xi) The Learned Society of Wales (LSW) 524205; (xii) Fulbright Scholar-In-Residence (S-I-R) Award 2022-23, PS00334379 by the US–UK Fulbright Commission and IIE, US Department of State at the California State University; (xiii) the Economic and Social Research Council (ESRC) ES/S012435/1 “WISERD Civil Society: Changing Perspectives on Civic Stratification/Repair”; and (xiv) Astera Institute, Cosmik Data Cooperatives for Open Science. Views and opinions expressed are, however, those of the authors only and do not necessarily reflect those of these institutions; none of them can be held responsible for them.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No data were used for the research described in the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix

Questionnaire (Translated from Basque and Spanish into English)

Questionnaire: Social Impact of Artificial Intelligence among the Residents in Gipuzkoa
Q0. In which language would you prefer to answer this questionnaire? (Basque / Spanish)
Q1. What is your sex/gender? (Woman / Man / Other)
Q2. How old are you? (Open response)
Q3. What is the postal code of the municipality where you live?
Q4. Do you use or have you used Artificial Intelligence applications or tools personally or professionally? (Yes / No – If No, skip to Q6)
Q5. How often do you use them? (Several times a day / Daily / Several times a week / Several times a month / Occasionally)
Q6. Have you used AI to compare prices and facilitate purchases (e.g., price comparators, Amazon, Google Shopping)?
Q7. Have you used AI to learn or study (e.g., educational platforms, ChatGPT, Duolingo)?
Q8. Have you used AI for health and wellbeing monitoring (e.g., Fitbit, Apple Health)?
Q9. Have you used AI for work tasks (e.g., writing texts, creating presentations, analyzing information)?
Q10. To what extent would you say you know what Artificial Intelligence is?
Q11. AI may pose risks to privacy because personal data can be used without users knowing how or by whom. (Level of agreement)
Q12. AI can improve information protection, fraud detection, and enable more personalized services. (Level of agreement)
Q13. In your case or in your environment, do you think AI can help improve psychological wellbeing and mental health?
Q14. AI can help strengthen social relationships. (Impact scale)
Q15. AI contributes to improving self-esteem. (Impact scale)
Q16. To what extent do you see a risk of dependence on AI technologies among people around you?
Q17. AI has reduced my workload. (Work impact scale)
Q18. AI has increased my level of stress. (Work impact scale)
Q19. AI has helped me be more creative. (Work impact scale)
Q20. AI has helped me make better decisions. (Work impact scale)
Q21. How has AI influenced purchases and services in your daily life?
Q22. How has AI influenced social and digital relationships in your daily life?
Q23. How has AI influenced administrative requirements in your daily life?
Q24. In your opinion, how does AI influence gender gaps? (Reduces / No effect / Increases / Don’t know)
Q25. In the workplace, who uses AI more? (Men more / Similar / Women more / Don’t know)
Q26. In leisure activities, who uses AI more?
Q27. In education and learning, who uses AI more?
Q28. In social networks and relationships, who uses AI more?
Q29. In your opinion, AI offers more opportunities to: (Young people / All generations equally / Adults / Don’t know)
Q30. As a guidance and training tool for youth, AI is: (Useful / Neutral / Useless / Don’t know)
Q31. Do you think AI facilitates or will facilitate the economic independence of young people?
Q32. How will AI affect youth mental health and emotional wellbeing?
Q33. Would you be willing to share your personal data for research or societal benefit?
Q34. Who should ensure the ethical development of AI? (Multiple choice)
Q35. What risks do you perceive in the use of AI in public administrations? (Multiple choice)
Q36. AI can be a key tool to reinvent volunteering and strengthen community work. (Level of agreement)
Q37. It should be a priority to implement community-based AI solutions in the towns and neighborhoods of Gipuzkoa. (Level of agreement)
Q38. Given the environmental impact of data centers, should stricter energy sustainability criteria be established for AI use?
Q39. Should specific measures be adopted to reconcile AI development with environmental sustainability?
Q40. Have you ever used an automated chat assistance service to communicate with a public administration (chatbot)?
Q41. In public services, which type of attention is better? (Face-to-face / Chatbots / Combination / No difference / Don’t know)
Q42. Should document and administrative procedures management be automated in public administrations?
Q43. Should taxation and tax management be automated (calculating taxes, payments, fraud detection)?
Q44. How will AI transform the future of industries? (Multiple choice)
Q45. AI can help create digital communities connecting people with common interests. (Evaluation scale)
Q46. AI can help reduce loneliness through virtual interlocutors or support systems. (Evaluation scale)
Q47. What is the highest level of education you have completed?
Q48. What is your current level of knowledge of the Basque language (Euskera)?
Q49. What is your current employment situation?
Q50. Approximately how much time per day do you use digital devices (mobile phone, computer, tablet, smartwatch, etc.)?
Q51. Approximately which range corresponds to your monthly income?

References

  1. Zuiderwijk, A.; Chen, Y.-C.; Salem, F. Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda. Gov. Inf. Q. 2021, 38, 101577. [Google Scholar] [CrossRef]
  2. Giest, S.; McBride, K.; Nikiforova, A.; Sikder, S.K. Digital and data-driven transformations in governance: A landscape review. Data Policy 2025, 7, e21. [Google Scholar] [CrossRef]
  3. Katzenbach, C.; Ulbricht, L. Algorithmic governance. Internet Policy Rev. 2019, 8. [Google Scholar] [CrossRef]
  4. Dunleavy, P.; Margetts, H. Data science, artificial intelligence and the third wave of digital era governance. Public Policy Adm. 2025, 40, 185–214. [Google Scholar] [CrossRef]
  5. Braithwaite, V. Beyond the bubble that is Robodebt: How governments that lose integrity threaten democracy. Aust. J. Soc. Issues 2020, 55, 242–259. [Google Scholar] [CrossRef]
  6. de Fine Licht, K.; Folland, A. AI in public decision-making: A philosophical and practical framework for assessing and weighing harm and benefit. Public Adm. 2025, 1–15. [Google Scholar] [CrossRef]
  7. Rinta-Kahila, T.; Someh, I.; Gillespie, N.; Indulska, M.; Gregor, S. Managing unintended consequences of algorithmic decision-making: The case of Robodebt. J. Inf. Technol. Teach. Cases 2024, 14, 165–171. [Google Scholar] [CrossRef]
  8. Burrell, J. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data Soc. 2016, 3, 2053951715622512. [Google Scholar] [CrossRef]
  9. Mergel, I.; Dickinson, H.; Stenvall, J.; Gasco, M. Implementing AI in the public sector. Public Manag. Rev. 2023, 1–14. [Google Scholar] [CrossRef]
  10. Dickinson, H.; Yates, S. From external provision to technological outsourcing: Lessons for public sector automation from the outsourcing literature. Public Manag. Rev. 2023, 25, 243–261. [Google Scholar] [CrossRef]
  11. Rachovitsa, A.; Johann, N. The human rights implications of the use of AI in the digital welfare state: Lessons learned from the Dutch SyRI case. Hum. Rights Law Rev. 2022, 22, ngac010. [Google Scholar] [CrossRef]
  12. Caragliu, A.; Del Bo, C.F. Regional institutions and the urban digital divide. Pap. Reg. Sci. 2025, 104, 100118. [Google Scholar] [CrossRef]
  13. Caragliu, A.; Mora, L.; Appio, F. AI governance, smart urban futures and territorial innovation systems. Reg. Stud. 2025, 59, 77–94. [Google Scholar]
  14. Fuerth, L.S.; Faber, E.M.H. Anticipatory governance: Practical upgrades. Issues Sci. Technol. 2012, 28, 44–50. [Google Scholar]
  15. OECD. Strategic Foresight for Better Policies: Building Effective Governance in the Face of Uncertainty; OECD Publishing: Paris, France, 2020. [Google Scholar]
  16. Mazzucato, M. The Entrepreneurial State: Debunking Public vs. Private Sector Myths; Anthem Press: London, UK, 2018. [Google Scholar]
  17. Mazzucato, M.; Kattel, R. Market-Shaping States: A New Theory of Public Sector Capacities and Capabilities; University College London: London, UK, 2026; Available online: https://www.ucl.ac.uk/bartlett/publications/2026/jan/market-shaping-states-new-theory-public-sector-capacities-and-capabilities.
  18. Bolton, M.; Mintrom, M. RegTech and creating public value: Opportunities and challenges. Policy Des. Pract. 2023, 6, 266–282. [Google Scholar] [CrossRef]
  19. Innobasque; Diputación Foral de Gipuzkoa. Guía para la Creación de Iniciativas Digitales sin Brechas; Diputación Foral de Gipuzkoa: Donostia–San Sebastián, Spain, 2023. [Google Scholar]
  20. Calzada, I.; Eizaguirre, I. Anticipatory AI governance in practice: Data sovereignty, urban AI, and trustworthy GenAI in the Basque Country. In Proceedings of the 2025 IEEE International Conference on Agentic AI (ICA), Wuhan, China, 5–7 December 2025; pp. 248–251. [Google Scholar] [CrossRef]
  21. Lewin, K. Action research and minority problems. J. Soc. Issues 1946, 2, 34–46. [Google Scholar] [CrossRef]
  22. Adib-Moghaddam, A. The Myth of Good AI: A Manifesto for Critical Artificial Intelligence; AI Futures, 2024. [Google Scholar]
  23. Calzada, I.; Eizaguirre, I. Digital inclusion and urban AI: Strategic roadmapping and policy challenges. Discover Cities 2025, 2, 73. [Google Scholar] [CrossRef]
  24. Calzada, I. Datafied Democracies and AI Economies Unplugged; Springer Nature: Cham, Switzerland, 2025. [Google Scholar]
  25. European Commission. Apply AI Strategy: Speeding up AI Adoption in Key Sectors across Europe; European Commission: Brussels, Belgium, 2025. [Google Scholar]
  26. European Commission; Joint Research Centre. Future Directions for Quantum Technology in Europe: An Analysis of Policy Questions; Publications Office of the European Union: Luxembourg, 2025. [Google Scholar] [CrossRef]
  27. European Commission; Directorate-General for Communications Networks; Content and Technology. Study on the Next Data Frontier: Generative AI, Regulatory Compliance and International Dimensions (No. 2024-020); Publications Office of the European Union: Luxembourg, 2025. [Google Scholar] [CrossRef]
  28. European Commission; Joint Research Centre. TechDispatch: Human Oversight of Automated Decision-Making; Publications Office of the European Union: Luxembourg, 2025. [Google Scholar] [CrossRef]
  29. Calzada, I. How do small nations cooperate? An action research framework for Wales and the Basque Country. Reg. Stud. Reg. Sci. 2024, 11, 87–102. [Google Scholar] [CrossRef]
  30. IBM; Basque Government. The Basque Government and IBM inaugurate Europe’s first IBM Quantum System Two in Donostia/San Sebastián. IBM Newsroom 2025. 14 October. Available online: https://newsroom.ibm.com/2025-10-14-the-basque-government-and-ibm-inaugurate-europes-first-ibm-quantum-system-two-in-donostia-san-sebastian (accessed on 19 October 2025).
  31. Kuziemski, M.; Misuraca, G. AI governance in the public sector: Three tales from the frontiers of automated decision-making in democratic settings. Telecomm. Policy 2020, 44, 101976. [Google Scholar] [CrossRef]
  32. Belmonte, B.; Villa, D.P.; Ardaiz, I.; Goia, N.; Valenciano, A.M. Democracy in the Digital Era: Digital Technologies and Democracy—An Analysis from the Perspective of Arantzazulab’s Laboratory Activities; Arantzazulab: Donostia–San Sebastián, Spain, 2025. [Google Scholar]
  33. Creswell, J.W.; Plano Clark, V.L. Designing and Conducting Mixed Methods Research, 3rd ed.; SAGE: Thousand Oaks, CA, USA, 2018. [Google Scholar]
  34. SAGE Handbook of Mixed Methods in Social & Behavioral Research, 2nd ed.; Tashakkori, A., Teddlie, C., Eds.; SAGE: Thousand Oaks, CA, USA, 2010. [Google Scholar]
  35. Cugurullo, F. AIdeology: Unpacking the ideology of artificial intelligence and its spaces. Antipode 2025. [Google Scholar] [CrossRef]
  36. Tubaro, P. The dual footprint of artificial intelligence: Environmental and social impacts across the globe. Environ. Sci. Policy 2025, 174, 104267. [Google Scholar] [CrossRef]
  37. O’Connor, R.; Bolton, M.; Saeri, A.K.; Chan, T.; Pearson, R. Artificial intelligence and complex sustainability policy problems: Translating promise into practice. Policy Des. Pract. 2024, 1–16. [Google Scholar] [CrossRef]
  38. Vasilopoulou, S.; Almeida, M.; Chiva, C.; Boda, Z.; Campos, A.S.; Falanga, R.; Stasavage, D.; Weimer, M. Future Challenges to Democracy; Publications Office of the European Union: Luxembourg, 2026. [Google Scholar]
  39. Coeckelbergh, M.; Sætra, H.S. Climate change and the political pathways of AI: The technocracy-democracy dilemma in light of artificial intelligence and human agency. Technol. Soc. 2023, 75, 102406. [Google Scholar] [CrossRef]
  40. Barasa, H.; Tay, P.; McBride, K.; Iosad, A.; Mökander, J. Sovereignty in the Age of AI: Strategic Choices, Structural Dependencies and the Long Game Ahead; Tony Blair Institute for Global Change: London, UK, 2026. [Google Scholar]
  41. Hawkins, Z.J.; Razavi, R.; Hodgman, M.; Weaver, J.; Lehdonvirta, V.; et al. From AI Sovereignty to AI Agency: Measuring Capability, Agency and Power: A Practical Tool for Policymakers; Tech Policy Design Institute: London, UK, 2025. [Google Scholar]
  42. Bolton, M. Transforming governance: Critical questions to guide public sector engagement with AI forthcoming. Data Policy, 2026. [Google Scholar]
  43. Albrengues, A.; Lu, L. Weight of gender in artificial intelligence models’ implementation in the European Union non-discrimination laws. Law Ethics Technol. 2025, 2025, 0007. [Google Scholar] [CrossRef]
  44. Tang, S.; Zhu, H. Mitigating bias in generative AI: A comprehensive framework for governance and accountability. Law Ethics Technol. 2024, 2024, 0008. [Google Scholar] [CrossRef]
  45. Zhang, C. An environmental understanding of privacy and data protection law. Law Ethics Technol. 2024, 2024, 0005. [Google Scholar] [CrossRef]
  46. Sieber, R.; Brandusescu, A.; van Geuns, J. Building AI Governance in Municipalities from the Ground Up; University of Toronto, School of Cities: Toronto, ON, Canada, 2026. [Google Scholar]
  47. Miller, R. Transforming the Future: Anticipation in the 21st Century; UNESCO Publishing: Paris, France, 2018. [Google Scholar]
  48. Purificato, E.; Bili, D.; Jungnickel, R.; Ruiz-Serra, V.; Fabiani, J.; et al. The Role of Artificial Intelligence in Scientific Research: A Science for Policy, European Perspective; Publications Office of the European Union: Luxembourg, 2025. [Google Scholar] [CrossRef]
  49. Cugurullo, F.; Xu, Y. When AIs become oracles: Generative artificial intelligence, anticipatory urban governance, and the future of cities. Policy Soc. 2025, 44, 98–115. [Google Scholar] [CrossRef]
  50. Sætra, H.S. A shallow defence of a technocracy of artificial intelligence: Examining the political harms of algorithmic governance in the domain of government. Technol. Soc. 2020, 62, 101283. [Google Scholar] [CrossRef]
  51. Lee-Geiller, S. Integrating civic and artificial intelligence in policymaking: Experimental insights on public policy evaluations. Policy Internet 2025, 17, 1–24. [Google Scholar] [CrossRef]
  52. Ostrom, E. Crossing the great divide: Coproduction, synergy, and development. World Dev. 1996, 24, 1073–1087. [Google Scholar] [CrossRef]
  53. Eaves, D.; Rao, K. Digital Public Infrastructure: A Framework for Conceptualisation and Measurement. In IIPP Working Paper 2025–01; UCL Institute for Innovation and Public Purpose: London, UK, 2025. [Google Scholar]
  54. OECD. Digital Public Infrastructure for Digital Governments; OECD Public Governance Policy Papers No. 68; OECD Publishing: Paris, France, 2024. [Google Scholar]
  55. Verhulst, S.G.; Chafetz, H.; Zahuranec, A. Data Commons in an Era of AI: Rethinking Data Access and Re-Use; SSRN: 2025. Available online: https://ssrn.com/abstract=4836354 (accessed on 19 October 2025).
  56. Indigenous Data Sovereignty and Policy; Walter, M., Kukutai, T., Carroll, S.R., Rodriguez-Lonebear, D., Eds.; Routledge: London, UK; New York, NY, USA, 2021. [Google Scholar]
  57. Aghion, P.; Antonin, C.; Bunel, S. The Power of Creative Destruction: Economic Upheaval and the Wealth of Nations; Harvard University Press: Cambridge, MA, USA, 2021. [Google Scholar]
  58. The Economics of Creative Destruction: New Research on Themes from Aghion and Howitt; Akcigit, U., Van Reenen, J., Eds.; Harvard University Press: Cambridge, MA, USA, 2023. [Google Scholar]
  59. Eloundou, T.; Manning, S.; Mishkin, P.; Rock, D. The labor market impact of ChatGPT: Evidence from online platforms. Sci. Adv. 2024, 10, eaaz1234. [Google Scholar]
  60. Albizu, M.; Estensoro, M. Mind the AI Gap: Bridging the AI Talent Mismatch in Education and Industry. In Orkestra Policy Briefs 01/2026; Orkestra, 2026. [Google Scholar] [CrossRef]
  61. International Monetary Fund. Gen-AI: Artificial Intelligence and the Future of Work; IMF Staff Discussion Note SDN/2024/001. Gen-AI: Artificial Intelligence and the Future of Work; International Monetary Fund: Washington, DC, USA, 2024. [Google Scholar]
  62. Jaumotte, F.; Kim, J.; Koll, D.; Li, E.Z.; Li, L.; et al. Bridging Skill Gaps for the Future: New Jobs Creation in the AI Age; IMF Staff Discussion Note SDN/2026/001; International Monetary Fund: Washington, DC, USA, 2026; ISBN 979-8-22902-819-6. [Google Scholar]
  63. Velasco, L.; Adan, S.N.; Khan, M.S.; Fox, J.; Corona, R.; Adeleke, F.; Effoduh, J.O.; Eder, L.; Sharp, M.; Muhaj, D.; Kalcic, K.; Trager, R. Financing the AI Triad: Compute, Data and Algorithms. A Framework to Build Local Ecosystems. In Oxford Martin AI Governance Initiative; University of Oxford: Oxford, UK, 2026. [Google Scholar]
  64. Calzada, I.; Eizaguirre, I. EcoTechnoPolitics: Towards planetary thinking beyond digital–green twin transitions. Societies 2026, 16, 57. [Google Scholar] [CrossRef]
  65. Vinuesa, R.; Azizpour, H.; Leite, I.; Balaam, M.; Dignum, V.; et al. The role of artificial intelligence in achieving the Sustainable Development Goals. Nat. Commun. 2020, 11, 233. [Google Scholar] [CrossRef]
  66. Luccioni, A.S.; Strubell, E.; Crawford, K. From efficiency gains to rebound effects: The problem of Jevons’ paradox in AI’s polarized environmental debate. arXiv 2025. [Google Scholar] [CrossRef]
  67. Wang, C.; Yin, Y.; Hu, H. The rise of algorithmic governance and the dual revolution: Applications, challenges, and governance of artificial intelligence in public administration. Technol. Soc. 2026, 86, 103264. [Google Scholar] [CrossRef]
  68. Papagiannidis, E.; Mikalef, P.; Conboy, K. Responsible artificial intelligence governance: A review and research framework. J. Strateg. Inf. Syst. 2025, 34, 101885. [Google Scholar] [CrossRef]
  69. Raieste, A.; Solvak, M.; Velsberg, O.; McBride, K. Government Efficiency in the Age of AI: Toward Resilient and Efficient Digital Democracies; Nortal: Tallinn, Estonia, 2025. [Google Scholar]
  70. Ozili, P.K. Digital public infrastructure: Concepts, global efforts, benefits, challenges, and success stories. Digit. Soc. 2025, 4, 1–22. [Google Scholar] [CrossRef]
  71. United Nations Development Programme (UNDP). Accelerating the SDGs through Digital Public Infrastructure: A Compendium of the Potential of Digital Public Infrastructure; UNDP: New York, NY, USA, 2023. [Google Scholar]
  72. Mazarr, M.J. A New Age of Nations: Power and Advantage in the AI Era; RAND Corporation: Santa Monica, CA, USA, 2026. [Google Scholar]
  73. Kerche, F.W.; Zook, M.; Graham, M. The silicon gaze: A typology of biases and inequality in LLMs through the lens of place. Platforms Soc. 2026, 3, 1–20. [Google Scholar] [CrossRef]
  74. Levy, H. Ethical, legal, and governance dimensions of responsible research and innovation: Global perspectives and challenges in emerging technologies. Law Ethics Technol. 2025, 2025, 0012. [Google Scholar] [CrossRef]
  75. Calzada, I. Data sovereignties in the GenAI age: From data-opolies to data cooperatives, trust, and geopolitical governance. In Springer Proceedings on Complexity; Springer: Cham, Switzerland, 2026; Available online: https://ssrn.com/abstract=5453496. [CrossRef]
  76. International Monetary Fund. Artificial Intelligence and the Future of Work: Macroeconomic Implications; IMF Staff Discussion Note; International Monetary Fund: Washington, DC, USA, 2024. [Google Scholar]
  77. World Economic Forum. Rethinking AI Sovereignty: Pathways to Competitiveness through Strategic Investments; World Economic Forum: Cologny/Geneva, Switzerland, 2026. [Google Scholar]
  78. Calzada, I. Understanding AI Economics; Edward Elgar: Cheltenham, UK, 2026. [Google Scholar]
  79. Barac, M.; López Rodríguez, M.I. Geopolítica digital: política de regulación y su importancia en inteligencia artificial en EE. UU., China y la UE. Int. Rev. Econ. Policy 2025, 7, 77–100. [Google Scholar] [CrossRef]
  80. Haider, J.; Rödl, M. Google Search and the creation of ignorance: The case of the climate crisis. Big Data Soc. 2023, 10, 1–12. [Google Scholar] [CrossRef]
  81. Chatterji, A.; Cunningham, T.; Deming, D.J.; Hitzig, Z.; Ong, C.; et al. How People Use ChatGPT; National Bureau of Economic Research: Cambridge, MA, USA, 2025; Available online: http://www.nber.org/papers/w34255.
  82. Han, J.; Qiu, W.; Lichtfouse, E. ChatGPT in Scientific Research and Writing: A Beginner’s Guide; Springer: Cham, Switzerland, 2024. [Google Scholar] [CrossRef]
  83. Kennedy, B.; Yam, E.; Kikuchi, E.; Pula, I.; Fuentes, J. How Americans View AI and Its Impact on People and Society; Pew Research Center: Washington, DC, USA, 2025. [Google Scholar]
  84. Li, Z.; Wan, X. Ethical challenges and innovations in AI-driven predictive policing: The case of China. Law Ethics Technol. 2025, 2025, 0005. [Google Scholar] [CrossRef]
  85. International Telecommunication Union. AI Standards for Global Impact: From Governance to Action (2025 Report); ITU Publications: Geneva, Switzerland, 2025. [Google Scholar]
  86. International Telecommunication Union. AI for Good Global Summit 2025: International AI Standards Exchange Report; ITU: Geneva, Switzerland, 2025. [Google Scholar]
  87. OECD. The OECD.AI Index: Technical Paper; OECD Publishing: Paris, France, 2026. [Google Scholar]
  88. OECD. Explanatory Memorandum on the Updated OECD Definition of an AI System; OECD Publishing: Paris, France, 2024; Available online: https://www.oecd.org/en/publications/explanatory-memorandum-on-the-updated-oecd-definition-of-an-ai-system_623da898-en.html.
  89. OECD; UNESCO. G7 Toolkit for Artificial Intelligence in the Public Sector; OECD Publishing: Paris, France, 2024. [Google Scholar]
  90. Barrett, A.; et al. The Multiple Streams Framework: A lens for understanding policy change. Policy Politics 2026, 54, 1–18. [Google Scholar]
  91. Bolton, M. What influences public decision-makers? An Australian case study. Aust. J. Public Adm. 2024, 83, 457–474. [Google Scholar] [CrossRef]
  92. Ilves, L.; Kilian, M.; Parazzoli, S.M.; Peixoto, T.C.; Velsberg, O. The Agentic State: Rethinking Government for the Era of Agentic AI; Global Government Technology Centre & The World Bank: Berlin, Germany, 2025. [Google Scholar]
  93. Ilves, L.; Kilian, M.; Peixoto, T.C.; Velsberg, O. The Agentic State: How Agentic AI Will Revamp 10 Functional Layers of Government and Public Administration; Global Government Technology Centre: Berlin, Germany, 2025. [Google Scholar]
  94. Calzada, I. Digital infrastructures of democracy. In Oxford Research Encyclopedia of Science, Technology, and Society; Fouché, R., Ed.; Oxford University Press: New York, NY, USA, 2026. [Google Scholar] [CrossRef]
  95. Chawla, R.; Iyer, A. Foundations of Digital Public Infrastructure; RIS: New Delhi, India, 2025; ISBN 81-7122-190-4. [Google Scholar]
  96. Clark, J.; Marin, G.; Ardic Alper, O.P.; Galicia Rabadan, G.A. Digital Public Infrastructure and Development: A World Bank Group Approach. In Digital Transformation White Paper; World Bank: Washington, DC, USA, 2025; Vol. 1. [Google Scholar]
  97. Access Partnership; Digital Cooperation Organization (DCO). Digital Public Infrastructure: A Key Building Block for Social Inclusion and Economic Development; DCO: Riyadh, Saudi Arabia, 2024. [Google Scholar]
  98. Ford, C.; Dell’Aquila, M.; Grabova, O.; Muñoz, I.; Renda, A. Building the European Digital Public Infrastructure: Rationale, Options, and Roadmap; CEPS In-Depth Analysis; CEPS: Brussels, Belgium, March 2025. [Google Scholar]
  99. Fountain, J. Public Sector Digital Infrastructure: Concepts, Measurement, and Frameworks; Working Paper No. 2025-04; UCL Institute for Innovation and Public Purpose: London, UK, 2025. [Google Scholar]
  100. Machen, R.; Nost, E. Thinking algorithmically: The making of hegemonic knowledge in climate governance. Trans. Inst. Br. Geogr. 2021, 46, 555–569. [Google Scholar] [CrossRef]
  101. Meneghin, G.; Stefani, S. Energy efficiency and social justice: The European challenge of the Energy Performance of Buildings Directive. Renew. Sust. Energy 2026, 2026, 0001. [Google Scholar] [CrossRef]
  102. Manor, I. What ChatGPT thinks about your country: Sentiments and frames of AI geographies. Policy Internet 2025, 17, 201–225. [Google Scholar] [CrossRef]
  103. Daly, A.; Hagendorff, T.; Hui, L.; Mann, M.; Marda, V.; et al. Artificial Intelligence Governance and Ethics: Global Perspectives; The Chinese University of Hong Kong, Faculty of Law: Hong Kong, China, 2019. [Google Scholar]
  104. UNESCO. Companion Document: The Guidelines for the Governance of Digital Platforms and Generative Artificial Intelligence; UNESCO: Paris, France, 2025. [Google Scholar]
  105. Lee-Geiller, S. Integrating Civic and Artificial Intelligence in Policymaking: Experimental Insights on Public Policy Evaluations; SSRN Working Paper No. 5063392; Yale University Institution for Social and Policy Studies: New Haven, CT, USA, 2024. [Google Scholar]
  106. Kasula, P.; Dedekorkut-Howes, A.; Shearer, H.; Baum, S. Social inclusion of urban villages: A systematic review of global urban planning practices. Cities 2026, 169, 106509. [Google Scholar] [CrossRef]
  107. Calzada, I. Human–AI governance through innovation systems: Digital sovereignty in the Basque Country’s healthcare system. In Springer Proceedings on Complexity; Springer: Cham, Switzerland, 2026; Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5931987.
  108. Fullerton, J.B. Regenerative Economics: Revolutionary Thinking for a World in Crisis; Capital Institute: Great Barrington, MA, USA, 2015. [Google Scholar]
  109. Dong, G. Environmental epidemiology in environmental health research: Opportunities and challenges for a new era. Int. J. Environ. Epidemiol. 2026, 2026, 0001. [Google Scholar] [CrossRef]
  110. Gaventa, J. Finding the spaces for change: A power analysis. IDS Bull. 2006, 37, 23–33. [Google Scholar] [CrossRef]
  111. Calzada, I.; Eizaguirre, I. Anticipatory AI governance in public administrations worldwide: Digital inclusion, territorial innovation, and datafied democracies. In Proceedings of the AAG 2026 Annual Meeting of the American Association of Geographers, San Francisco, CA, USA, 17–21 March 2026. [Google Scholar]
  112. Democratising AI: Towards Open, Decentralised AI Ecosystems; Chandola, B., Sarma, A., Eds.; Observer Research Foundation: New Delhi, India, 2026. [Google Scholar]
  113. Fountain, J.E. The moon, the ghetto and artificial intelligence: Reducing systemic racism in computational algorithms. Gov. Inf. Q. 2022, 39, 101645. [Google Scholar] [CrossRef]
  114. Leone de Castris, A. AI Governance around the World: European Union; The Alan Turing Institute: London, UK, 2025. [Google Scholar]
  115. Sharma, S.; Ramanathan, M.; Iyer, A.; Abraham, V. Digital Public Infrastructures: Lessons from India for a Thriving Data Economy; IE Center for the Governance of Change/iSPIRT Foundation: Madrid, Spain, 2023. [Google Scholar]
  116. Xu, C.; Munday, M.; Jones, C. Can an ICT satellite account help us to understand digital sovereignty? Econ. Syst. Res. 2025. [Google Scholar] [CrossRef]
  117. Hope, J.; Ludlow, P. Farewell to Westphalia: Crypto Sovereignty and Post-Nation-State Governance; Logos Press Engine: Zug, Switzerland, 2025. [Google Scholar]
  118. Calzada, I.; Garaikoetxea, A. Anticipating trustworthy GenAI in the public healthcare system: Co-producing human–AI governance between patients and GPs in the Basque Country through living lab assemblages. In Human-Centric AI: Harmonizing Humans and Technology; Misra, S., Traymbak, S., Chockalingam, S., Kjølerbakken, K.M., Braarud, P.Ø., Eds.; Routledge: Oxon, UK, 2026; Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5932177.
  119. Calzada, I.; Eizaguirre, I. Adimen Artifizialaren Gizarte Inpaktua Gipuzkoan [The Social Impact of Artificial Intelligence in Gipuzkoa]. Berria, 7 March 2026. Available online: https://www.berria.eus/iritzia/artikuluak/adimen-artifizialaren-gizarte-inpaktua-gipuzkoan_2154966_102.html.
Figure 1. Gipuzkoa Province as a Devolved Territorial Setting (Basque Country, Spain). Source: Calzada & Eizaguirre, 2026.
Figure 2. IBM Quantum System Two at the IBM-Euskadi Quantum Computational Centre, Donostia-San Sebastián (Basque Country, Spain). Source: IBM, 2025. Available online: https://newsroom.ibm.com/2025-10-14-the-basque-government-and-ibm-inaugurate-europes-first-ibm-quantum-system-two-in-donostia-san-sebastian (Accessed on 1st March 2026).
Figure 3. Six NGOs Aniztasunaren Sarea (Diversity Network) Workshop Co-Producing the Digital Inclusion & Generative AI Decalogue. Source: [23].
Figure 4. UIK Summer School on Digital Inclusion & Generative AI Culminated the Decalogue. Source: [23].
Figure 5. Decalogue.
Figure 6. Eleven Municipalities co-producing the Decalogue. Source: [23].
Table 1. Mixed Methods: Multistakeholder Framework.
Method | Technique | Multistakeholder Framework | Number
Qualitative | Action Research (via a Decalogue) | CSO/NGOs | 6
Qualitative | Action Research (via a Decalogue) | Directorates | 7
Qualitative | Action Research (via a Decalogue) | Municipalities | 11
Quantitative | Online Survey | Citizens | 911
Table 3. Six CSO/NGOs: Decalogue.
Blocks: DIGITAL DIVIDE (1–3) | ALGORITHMIC BIAS (4–6) | DATA DIVIDE (7–9) | DIGITAL FUTURES (10)
NGO | 1. Access & Obstacles | 2. Digital Literacy & Autonomy | 3. Devices, Mobile & Connectivity | 4. Experience of Algorithmic Discrimination | 5. Representation in Digital Spaces | 6. GenAI (Use & Perception) | 7. Data & Control | 8. Statistical Invisibility | 9. Digital Rights Incidents / Policies | 10. Vision: Just Digital Futures
Jatorkin Migrants face language barriers in digital public services. Need for multilingual digital training and support. Strong reliance on smartphones due to affordability constraints. Automated systems sometimes reject documents or identity verification. Migrants, particularly women, often invisible in institutional digital interfaces. Interest in GenAI for translation and administrative help. Concerns about how migration data are used. Migrant communities underrepresented in official statistics. Few mechanisms to contest automated decisions. Multilingual inclusive public digital services.
BidezBide Refugees depend on NGOs to access digital administrative services. Limited autonomy due to legal and documentation barriers. Shared devices and unstable connectivity common. Administrative algorithms may produce automatic rejections. Refugees rarely represented in digital governance narratives. GenAI perceived useful for translation and orientation. Concerns about data surveillance and personal records. Refugees frequently absent from policy datasets. Lack of legal clarity around digital rights protections. Rights-based digital services ensuring dignity and accessibility.
Haurralde Women in vulnerable contexts face multiple barriers to digital access. Digital literacy linked to empowerment and participation. Household device sharing limits autonomy. Online harassment and gender discrimination occur in digital platforms. Women underrepresented in digital governance debates. Interest in GenAI for education and employment opportunities. Concerns regarding gender data gaps. Limited gender-disaggregated data in digital policy. Weak institutional responses to online harassment. Feminist digital infrastructures prioritising safety and empowerment.
Elkartu Accessibility barriers remain in digital public services. Need for accessible training and assistive technologies. Assistive technologies often expensive or incompatible. Automated systems sometimes ignore accessibility needs. Disabled users excluded from digital design processes. GenAI may enhance accessibility but also risk bias. Need for accessible data governance frameworks. Disability statistics often incomplete. Social media content containing hate speech. Universal design embedded in digital public services.
Agifugi Members of the Gypsy community face connectivity barriers due to unstable housing. Literacy challenges linked to disrupted education trajectories. Dependence on low-cost mobile connectivity. Automated identity checks may trigger suspicion. Roma community invisible in digital policy debates. GenAI useful for legal information and translation. Concerns about biometric data governance. Gypsy community often absent in official datasets. Limited ways to report digital harms. Transparent digital governance and rights protection.
Emigrados Sin Fronteras (ESF) Infrastructure inequalities drive digital exclusion. Digital identity as key element through BakQ. Community-based digital literacy programmes needed. Territorial connectivity gaps persist. Algorithmic bias reflects structural inequalities in datasets. Global South and marginalised perspectives underrepresented. GenAI requires ethical governance frameworks. Data governance should prioritise public interest. Structural invisibility of vulnerable groups in official data. Need for accountability frameworks for digital harms. Community-centred digital ecosystems based on social justice.
Table 4. Seven Directorates: Decalogue.
Blocks: DIGITAL DIVIDE (1–3) | ALGORITHMIC BIAS (4–6) | DATA DIVIDE (7–9) | DIGITAL FUTURES (10)
Directorate | 1. Access & Obstacles | 2. Digital Literacy & Autonomy | 3. Devices, Mobile & Connectivity | 4. Algorithmic Discrimination | 5. Representation in Digital Spaces | 6. GenAI (Use & Perception) | 7. Data & Control | 8. Statistical Invisibility | 9. Digital Rights Incidents / Policies | 10. Vision: Just Digital Futures
Tax Directorate Reduce barriers to digital tax services (language/age/low skills) via assistance tools. Plain-language explanations + training to improve “fiscal autonomy.” Design GenAI services mobile-first and universally accessible. Avoid automated “risk profiling” that stigmatizes groups; independent ethical audits. Diversify internal teams (gender/language/origin) to reduce bias in design/data.
Note: one million invoices have been delivered through the TicketBAI system, assisted by Multiverse Computing.
Communicate clearly: “AI assists, does not decide”; manage perceptions of AI use. Data sovereignty architecture; no secondary use without clear authorization. Expand/repair representativeness (e.g., migrants/older adults) in datasets. Use Algorithmic Impact Assessments + AI Ethics Charter in taxation domain. “AI for Tax Justice”: efficiency plus rights plus social trust.
Road Directorate: Cross-border Data Spaces & Sensing
Access barriers framed around cross-border service interoperability and “usable” interfaces for citizens in the border context. Need for literacy/autonomy to understand data-driven services and cross-border digital processes. Connectivity & device constraints matter where sensing infrastructures (IoT) and services meet citizens.
Note: MUGI data availability could be a boost and an opportunity.
Risk that automated classifications in border/security/mobility contexts reproduce bias if not governed. Who is “seen” by border data infrastructures; representation risks for cross-border users. GenAI seen as useful (support/automation), but requires governance and bounded use cases. Data spaces governance is central: roles, ownership, permissions across institutions. “Invisible” populations can be missed if cross-border data standards/datasets exclude them. Need formal channels/protocols for incidents in sensor/data-space environments. “Trusted cross-border data spaces” that remain rights-based and inclusion-oriented.
Mobility / MUBIL–Landago Inclusion depends on ensuring mobility-related digital services don’t exclude low-access users. Literacy/autonomy needed for users and staff to navigate mobility platforms and data-driven services. Reliance on mobile access is treated as a design constraint in mobility/service delivery. Automated decisions in mobility/logistics require safeguards against biased routing/priority outcomes. Representation concerns in how mobility data reflects different communities/territories. GenAI potential for internal knowledge/support functions; needs clear boundaries. Strong emphasis on data governance arrangements across partners/ecosystem.
Note: the key question is what to do with these data and who should do what; a multistakeholder arrangement is required.
Territorial “blind spots” in mobility data can undermine policy design. Governance routines needed for harms/complaints when systems affect rights/access. Place-based, sustainable mobility innovation aligned with inclusion and public value considering multistakeholder frameworks.
Open Governance Digital access barriers framed as democratic/participation obstacles in public services. Literacy as civic capability: understanding, navigating, contesting digital procedures. Device/connectivity constraints treated as structural inequalities affecting participation. Concern with institutional accountability if automated processes reduce contestability. Representation as democratic legitimacy: whose voices/data shape institutional systems. GenAI adoption should be aligned with transparency and public accountability norms. Data control linked to governance-by-design and institutional responsibility. Missing data on excluded groups undermines evidence-based public policy. Need clear accountability/oversight mechanisms for digital harms and failures. Democratic innovation agenda: digital transformation that strengthens legitimacy and trust.
Environment / Territorial governance: EcoTechnoPolitical challenge Access obstacles understood via service design and territorial heterogeneity (who can actually use digital channels). Literacy/autonomy relevant for understanding complex environmental info/services and procedures. Connectivity/device dependence matters in territorial/environmental service contexts. Algorithmic bias risks in classification/assessment tools if datasets or proxies are skewed. Representation issues: which territories/groups are made visible in environmental data. GenAI usefulness acknowledged, but needs governance, constraints, and verification. Data governance stressed (sharing, stewardship, legality) in environmental data ecosystems: Data Cooperatives as an opportunity “Invisibility” arises when monitoring/data collection under-captures some areas/groups. Need policies for incident response and safeguards where automated outputs affect rights/services. Anticipatory, rights-aware digital futures integrated with sustainability aims.
Gender Equality Women constitute a structurally vulnerable population in increasingly digitised private and public spheres, including exposure in social media environments. Digital literacy and autonomy are essential for women to access services independently and participate in digital governance processes. Mobile-first digital realities require services that do not assume access to high-end devices or stable infrastructures. Algorithmic bias in AI systems may reproduce gender stereotypes and structural inequalities if governance mechanisms are not implemented. Persistent concerns regarding the representation and visibility of women in digital interfaces, technological sectors, and AI datasets. Generative AI is perceived as a promising tool but requires guidance, ethical safeguards, and gender-sensitive governance frameworks. Data governance must ensure control, privacy, and responsible reuse of gender-related datasets. Lack of gender-disaggregated data limits the ability of institutions to design inclusive and evidence-based policies. Online harassment and gender-based digital violence (including social media behaviour) remain significant governance challenges. A feminist and inclusive digital transition where AI governance integrates equality, transparency, and democratic accountability.
Administration Modernization Access improved through conversational interfaces that reduce friction (if designed for inclusion). Literacy support: chatbots can guide procedures, but must avoid over-reliance and provide alternatives. Human-AI Governance (HAIG) Works well on mobile; must be accessibility-compliant and multilingual where relevant. Chatbot outputs can discriminate or mislead; requires monitoring, testing, and guardrails. Representation: language, disability access, and cultural fit in conversational design. Chatbot as high-impact GenAI use case; needs bounded scope and human oversight. Governance over training data, logs, retention, and vendor dependencies. Interaction logs can reveal “who is missing” and where services fail—if ethically governed. Incident protocols needed (hallucinations, harmful advice, privacy failures). “Trustworthy admin chatbot”: transparency + oversight + inclusion by design.
Table 5. Eleven Municipalities: Decalogue.
Blocks: DIGITAL DIVIDE (1–3) | ALGORITHMIC BIAS (4–6) | DATA DIVIDE (7–9) | DIGITAL FUTURES (10)
Municipality | 1. Access & Obstacles | 2. Digital Literacy & Autonomy | 3. Devices, Mobile & Connectivity | 4. Algorithmic Discrimination | 5. Representation in Digital Spaces | 6. GenAI (Use & Perception) | 7. Data & Control | 8. Statistical Invisibility | 9. Digital Rights Incidents / Policies | 10. Vision: Just Digital Futures
Hernani Many residents struggle with digital procedures and often need to visit the town hall in person for assistance. Citizens frequently require support to complete digital processes, indicating limited autonomy. Older devices and slow internet connections remain common among vulnerable groups. Public digital services are often perceived as not designed for disadvantaged users. Vulnerable populations rarely appear in digital platforms or public communication channels. Generative AI is sometimes perceived as a control tool rather than a support mechanism. Data governance issues are considered secondary within local service priorities.
Hernani is pushing ahead a municipalist agenda called Hernani Burujabe, based on digital sovereignty principles [125].
Some communities remain absent from official administrative registers. Limited local guidance exists for handling digital rights issues. Future governance in Hernani will establish a municipalist agenda inspired by Barcelona's 2015–2018 experience.
Zarautz Many citizens still lack basic digital access to public services. Informal mutual support networks help residents navigate digital procedures. Smartphones represent the dominant form of access to digital services. Hate speech and discriminatory discourse appear frequently on digital platforms. Some communities are represented through stigmatizing narratives online. Generative AI adoption remains limited and is often viewed with distrust. Data infrastructures are perceived as distant and difficult for citizens to influence. Certain social groups remain statistically invisible within administrative datasets. Municipal competences regarding digital rights remain unclear.
Note: is this a competence of the ICT department or of the Inclusion department?
Digital inclusion should guide future digital policy development.
Ordizia Digital access varies significantly between neighbourhoods. Low levels of digital literacy hinder effective participation in digital services. Mobile phones constitute the only digital access point for many residents. Structural discrimination may emerge through automated systems and administrative procedures. Persistent stereotypes shape the digital representation of certain communities. Generative AI remains perceived as distant and unfamiliar. Limited knowledge exists regarding how municipal data infrastructures function. Some communities remain outside digital governance processes. Municipal digital rights policies remain insufficiently defined. Ethical reflection should guide future digital governance.
Urretxu Digital services exist but often require support or mediation to be effectively used. Digital literacy varies strongly depending on education level. Connectivity infrastructure is relatively functional but unevenly used. Digital environments are sometimes perceived as potentially risky spaces. Municipal communication campaigns attempt to ensure inclusive representation. Safe and responsible use of AI technologies still requires institutional development. Risks associated with data governance are not always clearly understood. Some groups attempt to increase visibility through community data initiatives. Follow-up mechanisms for digital harms remain limited. Transparency should guide future digital transformation policies.
Arrasate Cultural and community centres function as key access points to digital services. Citizens’ digital autonomy remains very limited without institutional support. Access frequently depends on shared community infrastructures. Algorithmic bias is sometimes perceived in automated systems. Digital infrastructures are not considered neutral by users. Fear and uncertainty dominate perceptions of generative AI technologies. Data governance is rarely prioritised within municipal decision-making. Digital representations of communities may project inaccurate images. Institutional roles in protecting digital rights remain unclear. Policies should aim at fair and inclusive digital governance.
Donostia The Inclusion department has not conducted such observations so far; who should be in charge of digital inclusion? Public awareness of digital risks and rights remains relatively low. Public innovation spaces (e.g., Tabakalera) provide alternative access infrastructures. Online narratives may criminalise vulnerable groups. Negative digital narratives often shape public perceptions. Generative AI is seen as having strong potential but also generating confusion. Data governance sometimes evokes “Big Brother” concerns among citizens. Risks of data manipulation are frequently discussed. Lack of training limits understanding of digital rights. Digital empowerment should guide the development of future governance models.
Errenteria Digital access depends heavily on public spaces and shared infrastructures. Citizens’ autonomy in digital environments remains very limited. Women in particular often rely on outdated devices. Online discourse may reinforce social roles and criminalisation narratives. Hate speech is frequently present in digital spaces. Generative AI is sometimes perceived as exclusionary. Citizens often feel subject to surveillance and control. Lack of integrated data systems limits understanding of social realities. Protection mechanisms for digital harms remain insufficient. More institutional resources are required to ensure inclusive digital governance.
Eibar Many citizens still rely on in-person administrative procedures. Basic digital skills are lacking among parts of the population. Mobile phones represent the only digital access channel for many residents. Online environments are sometimes perceived as spaces of hostility. Some groups remain difficult to identify or represent in digital datasets. Adoption of generative AI is conditional and cautious. Citizens sometimes perceive increasing digital restrictions. Disabilities and accessibility needs are often underrepresented in datasets. Municipal competences regarding digital rights protection are limited. Policies should prioritise digital empowerment.
Pasaia A significant generational digital divide persists. Limited critical engagement with digital systems among some residents. Moderate smartphone usage represents the main access channel. Digital systems may fail to support social integration. Digital representations often fail to reflect local realities. Generative AI remains largely unknown among many citizens. Data use is often perceived as restrictive rather than empowering. Increased visibility may sometimes generate harmful stereotyping. Training on digital rights remains insufficient. Inclusive public spaces should guide digital futures.
Tolosaldea Small municipalities face resource constraints that affect digital inclusion policies.
Note: although this contrasts with landago.eus in the Goierri county.
Citizens’ use of digital services remains very limited. Digital connectivity often occurs collectively through shared infrastructures. Housing and employment barriers intersect with digital exclusion. Representation of vulnerable groups remains weak. Generative AI governance is often framed through a logic of institutional control. Citizens often perceive data governance as risky. Rural territories remain statistically invisible in many datasets. Lack of resources limits digital rights policies. A comprehensive territorial digital inclusion policy is needed.
Irun Public libraries play a key role as digital access points. Citizens’ digital autonomy remains limited. Community infrastructures facilitate connectivity. Experiences with digital systems are often mixed. Representation of some communities remains weak. Generative AI is sometimes discussed from a care and social-support perspective. Awareness of digital rights remains low. Experimental initiatives attempt to increase visibility through local data. Concerns about freedom and privacy occasionally arise. Digital coexistence and social cohesion should guide future governance.
Table 6. Online Survey: Methodological Correlation between the Questionnaire and the Decalogue.
Decalogue Block | Decalogue Dimension | Main Questionnaire Items | Quantitative Operationalisation
Digital Divide 1. Access & Obstacles Q4, Q21–Q23 Use of AI; perceived facilitation or complication of everyday and administrative tasks
2. Digital Literacy & Autonomy Q10, Q30 Self-reported knowledge of AI; perceived usefulness of AI for youth guidance and training
3. Devices, Mobile & Connectivity Q6–Q9, Q50 Use of AI in learning, work, shopping, and wellbeing contexts; indirect evidence of everyday device-based uptake
Algorithmic Bias 4. Experiences of Digital/Algorithmic Discrimination Q11, Q35 Privacy-risk perception; perception of discrimination and opacity in automated public decisions
5. Representation in Digital Spaces Q24–Q28 Perceptions of gender gaps and uneven representation in AI use
6. Generative AI (Use & Perception) Q4–Q9, Q16 Extent and frequency of AI use; perceived dependency risks
Data Divide 7. Data & Control Q33–Q34 Willingness to share data; views on who should guarantee ethical AI development
8. Statistical Invisibility Q35 Perception that digital bureaucracy excludes part of the population
9. Digital Rights Incidents / Policy Gaps Q35, Q40–Q43 Privacy, transparency, discrimination, big-tech control, and views on automation in public administration
Digital Futures 10. Vision: Just Digital Futures Q36–Q39, Q45–Q46 Support for community AI, sustainable AI, and socially oriented digital futures
Table 7. Broad Quantitative Results of the Citizen Survey Organized through the Decalogue.
Decalogue Block | Decalogue Dimension | Q/Indicator | Result (%)
1. DIGITAL DIVIDE
1. Access & Obstacles Q4: Respondents who use or have used AI 72.7
Q23: Respondents saying AI facilitates administrative requirements 35.9
2. Digital Literacy & Autonomy Q10: Respondents reporting good or deep knowledge of AI 46.3
Q2 + Q4: AI use among ages 16–34 89.1
Q2 + Q4: AI use among ages 55+ 52.4
3. Devices, Mobile & Connectivity Q7: Use AI for learning/study at least sometimes 47.4
Q9: Use AI for work at least sometimes 46.8
2. ALGORITHMIC BIAS
4. Digital/Algorithmic Discrimination Q11: Agree that AI creates privacy risks 71.8
Q35: Identify exclusion through digitalised bureaucracy as a public-sector risk 35.3
Q35: Identify lack of transparency/discrimination in automated decisions 46.8
5. Representation in Digital Spaces Q24: Believe AI increases gender gaps 11.5
Q27: Believe men and women use AI similarly in education 58.6
6. Generative AI (Use & Perception) Q5: AI users who use it at least weekly 62.2
Q16: Perceive high or very high dependency risk 43.0
3. DATA DIVIDE
7. Data & Control Q33: Would share data only with privacy guarantees or anonymisation 47.5
Q33: Would not share personal data 32.5
Q34: Believe governments should guarantee ethical AI 53.7
8. Statistical Invisibility Q35: Perceive that bureaucracy digitalisation excludes part of the population 35.3
9. Digital Rights / Policy Gaps Q40: Have used an automated chatbot to communicate with an administration 42.3
Q41: Prefer face-to-face public-service attention 59.9
Q42: Support automating documents and procedures 63.7
4. DIGITAL
FUTURES
10. Vision: Just Digital Futures Q39: Support specific measures to align AI with environmental sustainability 55.9
Q38: Support stricter sustainability criteria for AI use 51.6
Q29: Believe AI offers more opportunities to young people 60.4
Table 8. Digital Inclusion Index and AI Governance Perception Index.

| Index | Dimension | Survey Question | Indicator Description |
|---|---|---|---|
| Digital Inclusion Index | Access to AI | Q4 | Respondents who use or have used AI tools |
| | Frequency of use | Q5 | AI users who use AI at least weekly |
| | AI in education | Q7 | Use of AI tools for learning or study |
| | AI in work | Q9 | Use of AI tools for work-related activities |
| | AI literacy | Q10 | Respondents reporting good or deep knowledge of AI |
| | Digital public services | Q40 | Respondents who have used an automated chatbot to interact with public administration |
| AI Governance Perception Index | Privacy risks | Q11 | Agreement that AI creates risks for personal data privacy |
| | Data governance | Q33 | Willingness to share personal data under privacy guarantees or anonymisation |
| | Institutional responsibility | Q34 | Belief that governments should guarantee ethical AI development |
| | Bureaucratic exclusion | Q35 | Perception that digitalisation of bureaucracy may exclude part of the population |
| | Algorithmic transparency | Q35 | Perception of lack of transparency or discrimination in automated decisions |
| | Human oversight | Q41 | Preference for face-to-face interaction in public services rather than automated systems |
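Table 8 maps each composite index to its constituent survey questions but does not spell out the aggregation rule in this excerpt. A minimal sketch, assuming the simplest construction, an unweighted mean of the indicator percentages, is shown below; the function name and the example values are hypothetical, not the paper's own computation.

```python
# Hypothetical sketch of a composite index as the unweighted mean of its
# indicator percentages. The paper's exact weighting scheme is not stated
# in this section, so equal weights are an assumption.

def composite_index(indicators: dict) -> float:
    """Return the equal-weight average of indicator values (in %)."""
    return sum(indicators.values()) / len(indicators)

# Illustrative (invented) county-level shares for the six Digital
# Inclusion Index dimensions of Table 8 (Q4, Q5, Q7, Q9, Q10, Q40).
example = {"Q4": 70.0, "Q5": 60.0, "Q7": 48.0, "Q9": 46.0, "Q10": 44.0, "Q40": 38.0}
print(composite_index(example))
```

Under this equal-weight assumption, a county scoring higher on any single indicator moves the index by the same amount, which keeps the two indices directly comparable across Gipuzkoa's counties in Table 9.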
Table 9. Digital Inclusion Index and AI Governance Perception Index by Gipuzkoa’s Counties.

| County | Digital Inclusion Index | Rank | AI Governance Perception Index | Rank |
|---|---|---|---|---|
| Donostialdea | 57.6 | 1 | 50.1 | 1 |
| Oarsoaldea | 54.6 | 2 | 49.6 | 2 |
| Debagoiena | 54.2 | 3 | 49.6 | 2 |
| Goierri | 53.9 | 4 | 49.4 | 4 |
| Bidasoa-Oiartzun | 53.5 | 5 | 49.1 | 7 |
| Tolosaldea | 53.4 | 6 | 49.3 | 5 |
| Urola Garaia | 53.0 | 7 | 49.2 | 6 |
| Urola Kosta | 52.8 | 8 | 49.1 | 7 |
| Debabarrena | 52.3 | 9 | 49.0 | 9 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.