
Contention in Carbon Accounting in the Digital Industry: The Need to Move towards Decision-Making in Uncertainty



Submitted: 31 January 2024. Posted: 01 February 2024.

Abstract
In this paper, we present findings from a qualitative interview study, which highlights the difficulties and challenges with quantifying carbon emissions, and discusses how to move productively through these challenges by drawing insights from studies of deep uncertainty. Our research study focuses on the digital sector and was governed by the research question: how do practitioners researching, working or immersed in the broad area of sustainable digitisation (researchers, industry, NGOs, and policy representatives) understand and engage with quantifying carbon? Our findings show how stakeholders struggled to measure carbon emissions across complex systems, the lack of standardisation to assist with this, and how these challenges led stakeholders to call for more data to address this uncertainty. We argue that calls for more data to address this uncertainty obscure the fact that there will always be uncertainty, and that we must learn to govern from within it.
Subject: Social Sciences - Sociology
Climate change is one of the most pressing challenges currently facing society. International and national attempts to help mitigate climate change have focused on accountability practices that require the collection of data to assess carbon emissions, to set goals, and then to gauge progress towards these goals. This has meant that climate change is fast becoming ‘a problem of gathering data and acting on that data within the terms set by these modes of calculation’ [1]. This approach has become so central to climate change policies that such policies nowadays both presume and require the systematic quantification of carbon emissions [2]. Implicit in this is the assumption that if we can make carbon emissions visible and accountable through quantification, we can better understand and take steps to reduce them [3]. This has reinforced modernist assumptions that place faith in the ability to solve climate change challenges by calculating and then managing carbon emissions [4,5]. As such, we have seen a range of carbon accounting tools and burgeoning guidelines and frameworks to help individuals, businesses, institutions, and nations calculate carbon emissions.
At the same time, carbon quantification has a range of difficulties and associated challenges, meaning that it often provides uncertain answers. While in the broader literature responses to uncertainty often call for more data (for example, see [6]), as we go on to highlight, carbon quantification is not a fact-finding mission towards some ultimate certain truth, nor a fact-finding mission with a ‘correct’ scientific way to quantify and calculate carbon emissions if only we can collect enough data [7]; calculating carbon does not represent the amount of material carbon ‘out there’. This means that in addressing uncertainty we need to engage with the fact that carbon calculations are representations of the world [8,9], constructed through socio-technical processes, relationships, and interactions between actors, organisations, data, information, and policies [1,4,10,11].
In this paper, we explore the socio-technical processes of carbon accounting in the digital sector, and through doing so, question the call for more data to address uncertainty. To do this, we draw on a qualitative interview study designed to explore digital sustainability stakeholder practices in the field of sustainable digitisation. Our study was governed by the research question: how do practitioners researching, working, or immersed in the broad area of sustainable digitisation (researchers, industry, NGOs, and policy representatives) understand and engage with quantifying carbon? While interviewees were working in different areas of sustainable digitisation, many participants were engaged in carbon accounting and/or used carbon accounting data in their practices, and it is these aspects of the interviews that are reported in this paper. Our findings show how these stakeholders struggled to measure carbon emissions across complex social and political systems, and the lack of standardisation to assist with this. As our participants tried to move towards a state of certainty associated with carbon accounting so that standards could be implemented, they were hindered by a range of social and political challenges such that uncertainty remained.
In the discussion, we argue that without engaging with these insights about carbon quantification, we have a greater tendency to think that current uncertainty in carbon accounting can be addressed with the ever-greater accumulation of data. We emphasise the infeasibility of attaining certainty through this data collection and the need to accept uncertainty, take action despite it, and govern within it [12]. We consider insights from studies of uncertainty and propose an adaptive approach to uncertainty as a means of moving productively through the challenges experienced by our participants. Before presenting our findings, we provide a brief overview of carbon quantification to contextualise them, and then introduce our case study on digital technologies.

1. A Brief Overview of Quantifying Carbon

Carbon emissions are typically classified according to the Greenhouse Gas Protocol, which divides emissions into three ‘scopes’. Scope 1 covers direct carbon emissions produced by an innovation, organisation, or sector. Scope 2 covers indirect emissions, such as those produced by third parties generating the electricity that powers an innovation, organisation, or sector. Scope 3 covers emissions resulting from the manufacture and production of a product (i.e., the ‘embodied emissions’), as well as other indirect emissions, such as commuting to work and end-of-life emissions. Within carbon accounting, different methodologies have been developed to assist with these calculations. Life cycle assessments (LCAs) are among the most notable for assessing Scope 3 emissions, which are understandably the hardest to calculate given the range of associated uncertainties and lack of data. LCAs measure carbon emissions across the life cycle of a product, organisation, or sector1 and are often called cradle-to-grave analyses. They require considerable time and judgment and are complicated by the range of divergent methods in use.
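To make the scope distinction concrete, the sketch below shows how a simple scope-based inventory might be organised in code. It is a minimal illustration only: the category names and figures are hypothetical assumptions of ours, not data or tooling from this study.

```python
# Minimal sketch of a Greenhouse Gas Protocol-style inventory.
# All category names and figures are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class CarbonInventory:
    scope1: dict = field(default_factory=dict)  # direct emissions (tCO2e)
    scope2: dict = field(default_factory=dict)  # purchased electricity
    scope3: dict = field(default_factory=dict)  # embodied and other indirect

    def total(self) -> float:
        return sum(v for scope in (self.scope1, self.scope2, self.scope3)
                   for v in scope.values())

inventory = CarbonInventory(
    scope1={"backup_generators": 12.0},
    scope2={"grid_electricity": 540.0},
    scope3={"server_manufacture": 310.0, "staff_commuting": 45.0,
            "end_of_life": 20.0},
)
print(f"Total: {inventory.total():.0f} tCO2e")  # Total: 927 tCO2e
```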

2. Digital Technologies

Digital Technologies (DTs)2 have an important role to play in mitigating climate change. For example, DTs are often viewed as a driver for reducing the carbon emissions of various sectors by providing information to reduce energy consumption [13,14,15]. DTs can also be used to analyse very diverse datasets and to support climate-related decision-making when dealing with highly complex environmental systems. The efficiencies DTs deliver are expected to be harnessed across the economy, enabling a comfortable way of life to continue as society reduces its carbon emissions. Despite this exciting potential, there is growing concern regarding the carbon emissions of DTs themselves. These arise during the manufacture of devices and device components, the use of DTs (e.g., the storage and processing of large amounts of data), and the disposal of hardware [16]. According to some experts, the global carbon emissions of DTs are steadily rising and will continue to rise despite continual improvements in efficiency [16]. This is because, while likely improvements in energy efficiency and the move to renewable energy relieve at least some of these concerns, the pace of digital innovation could outpace the world’s renewable energy sources, leading to increases in carbon emissions at a time when other sectors are decreasing their energy use [16,17]. Furthermore, the rebound effects of DT solutions mean that while increases in energy efficiency may be perceived to offer environmental advantages, they may instead lead to an increase in digital consumption [18,19,20]. As such, the sector is now facing pressure (like all other sectors) to find ways to reduce its emissions through carbon accounting.
This growing concern has led to a strong focus on the measurement of carbon emissions associated with DTs. A range of tools have been developed to help calculate the emissions related to DTs, and many groups are producing guidance to help with the measurement of digital products, practices, and processes (for example, see [21]). At the same time, compared to other industries, there is still little regulatory pressure to quantify emissions in the digital sector: digital-sector expansion rarely requires the formal approvals that oblige other sectors, such as construction, to report the potential environmental impacts associated with their activities. Furthermore, because the digital sector comprises a complicated infrastructure of devices, technologies, systems, and networks, and because digital devices are manufactured through complex processes involving a vast number of different minerals, metals, and other substances, carbon calculations are particularly difficult. While progress is being made in developing methodologies for calculating impacts, we are still seeing vast discrepancies and disagreement in published quantifications of digital technology/sector carbon emissions, and uncertainties abound in attempting to measure net carbon impacts [16]. The DT sector is pushing for more data to address these uncertainties; however, it is unclear how much certainty in these measurements is needed before action is taken to reduce the DT sector’s carbon emissions.

3. Methods

3.1. Interviews

Participants were identified from (a) the literature (see below) and (b) snowballing via key stakeholders in the field known to the authors, as well as via other interviewees. Seventy-three individuals were invited to participate via email and 24 accepted the invitation. Table 1 reports the self-reported sectors of the interviewees. Interviewees were based primarily in the UK and continental Europe, with others in North America (n=3) and Australia (n=1). Seventeen interviewees were male. Interviews were conducted online or by phone and were broad in scope, exploring participants’ roles and work practices, as well as perceived challenges and issues in the digital technology sector as they pertained to environmental sustainability.
Analysis of interview data was conducted using inductive thematic analysis [22]. GS and a research assistant independently read and re-read each interview transcript and noted key themes in a memo. A meeting was held to discuss relevant themes and overlaps. GS and the research assistant then independently coded the data. GS then drew on this coded data to draw out key points for analysis. Where interviewees’ perspectives were associated with their self-reported sector, this is reported in the findings. No other sector-specific perspectives were noted.

3.2. Identification of Interviewees from the Literature

Articles in Web of Science published between 2016 and 2021 were searched using four separate keyword string combinations that were developed deductively and inductively (appendix: Table 1). Together, the keyword strings returned 4,598 articles.
Titles and abstracts of articles were reviewed for relevance. An initial 300 articles were reviewed in duplicate to ensure consistency of approach. Articles were included if they explored or discussed the environmental impacts of DTs; articles were excluded if the environmental impacts they discussed were not specifically associated with the digital aspects of the technology. A research assistant then applied the inclusion/exclusion criteria to the remaining articles. 489 articles remained after review. All publications (n = 489) were imported into VOSViewer.3 VOSViewer collects and organises aggregated articles and allows the construction of co-authorship networks of all contributing authors (forthcoming); it also allows the generation of lists of authors according to specified criteria. Lists were generated of the authors most represented in our sample in terms of number of publications (defined as having three or more publications), as well as those most cited (taking the top 20). Authors were invited to interview based on contact details provided in one or more of their publications. We considered that this selection would allow us to speak to those who hold the most knowledge and expertise in the field of digital sustainability (in terms of academic research), as well as those who are helping to shape key debates.
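In pseudocode terms, the selection criteria reduce to a filter and a union. The sketch below is our own illustration, assuming a hypothetical exported table of authors with publication and citation counts; it is not the actual VOSViewer workflow.

```python
# Sketch of the author-selection criteria, assuming a hypothetical
# exported list of (author, n_publications, n_citations) tuples.
records = [
    ("author_a", 4, 120),
    ("author_b", 1, 300),
    ("author_c", 3, 15),
    # ...one row per contributing author across the 489 articles
]

most_published = {a for a, pubs, _ in records if pubs >= 3}
top_cited = {a for a, _, cites in sorted(records, key=lambda r: r[2],
                                         reverse=True)[:20]}
invited = most_published | top_cited  # union of the two criteria
```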

3.3. Limitations

Fields that publish heavily in the journal literature, such as the sciences, are better covered in Web of Science than those that do not, such as philosophy. Nonetheless, Web of Science is one of the broadest academic databases covering a wide range of subjects. For interviews, our sample of 24 participants was limited by a lack of low-to-middle income country representation, and was predominantly male (which most likely reflects the gender gap in the field).

3.4. Ethics

This study received ethics approval from the Oxford University Central University Research Ethics Committee (CUREC): reference: R75723/RE001.

4. Findings

Early in our interviews, it became clear that interviewees had different views about how much the digital sector contributed to global emissions. While some interviewees worried about what they perceived to be the increasing growth curve of predicted digital technology energy use and the likely increases in data centre energy consumption over time (interviewee 4, business rep; interviewee 16, computer science), other interviewees argued that their own reported analyses suggested that such concerns were ‘disproportionate’, and that ‘the impacts [of the sector] are relatively small’ (interviewee 10, digital energy analyst). Interviewees pointed to a range of uncertainties and challenges associated with quantifying carbon emissions, which they believed explained these different views. These related to the difficulties of gaining data about complex systems; the fragmentation of different knowledge communities in the field; and a lack of transparency and standards. These challenges led to gaps in emissions data, incorrect assumptions about data, and personal decision-making about how to address these issues. These are described in more detail below.

4.1. How Do You Know What to Measure?

The first difficulty is associated with either a lack of data altogether or a lack of up-to-date data. For example, there is a lack of data about the manufacturing and transport of each component comprising a digital technology, which is needed to account for embodied carbon emissions; this makes it challenging to conduct calculations (interviewee 18, industry). Several interviewees with expertise in assessing the embodied emissions of specific digital devices discussed their own experiences of trying to source such data for their calculations. But even when the data are present, they may not be usable: the speed with which hardware is replaced in data centres means that calculations quickly become outdated, especially given the time-lag of peer-reviewed publishing.
These issues were compounded by a perceived reluctance by industries to release proprietary information about their carbon emissions: ‘it was impossible to get hold of any information on that [for their calculations related to data centres]….Even with people we know…it was just impossible’ (interviewee 11). These more political and economic issues around openness and transparency, which were tied to industry concerns about competition, public image, and trust, affected researchers’ abilities to make accurate predictions of carbon consumption: ‘with digital in general, I think the main thing is being transparent….digital trust and responsibility’ (interviewee 22, NGO). Interviewee 1, who worked at a large digital company, spoke of frequently being provided with such information only after signing a contract that forbade them from publishing the data externally: ‘if we ask our suppliers…they say, “Yeah, you can use the data, but you can never publish our data externally. You can aggregate it in a product…but you cannot sort of talk about our data”’. The unwillingness of data centres and/or other industries to disclose information about carbon emissions, and/or the lack of information about what these emissions were, meant that there were many gaps in their datasets: ‘we are having really gaps in the calculations….sometimes we see the carbon footprint only for the use phase….because they have to go to their own suppliers and accounting is really difficult’ (interviewee 18, industry).
Moreover, our interviewees explained that this difficulty is especially acute when accounting involves assessing emissions across digital networks and within data centres, because the data associated with a particular device, such as a mobile phone, is difficult to disentangle from other data present in data centres. This is because data for specific purposes are not contained or constrained away from other data uses: ‘when you get into the network and the data centre, these are opaque systems…it’s not possible to detect what device is actually being used, or what devices are running, or what equipment is running on the network or in the data centre’ (interviewee 6). Disentangling the emissions associated with the use of one device or service from another is difficult because data centres are ‘opaque’ and entangled networks. The data are not just inaccessible, but also hard to produce given the infrastructure.
Accounting for carbon emissions when using a device or service is only one stage of a comprehensive calculation, which should also include the emissions from manufacturing the device and its underlying physical infrastructure. When calculating embodied emissions (Scope 3), one of the most pressing issues was the difficulty of making calculations with all the different data that would be needed if the manufacturing of each component constituting the device had to be considered. Interviewee 11, an academic researcher working with data centres, exemplified the issue using a pair of glasses. This interviewee described the various factors a researcher would need to think about when calculating the embodied emissions of these glasses, explaining how this would be vastly more complex for digital technologies, which have many more components:
when you start to break down where these pair of glasses come from in terms of materials, you end up normally with at least… 200 sources, of…raw materials… which have embodied impact in terms of extracting the minerals, or the raw material, transported and manufacturing in different parts of the world and comes here.
Again, as in the case of disentangling the emissions associated with the use of one device in a data centre, the issue in manufacturing pertains to the challenge of following the material infrastructures, discerning all their relevant parts, and calculating the emissions for each one of them. Both the intricate nature of data centre networks and the multiplicity of components resonate with the idea that these are complex systems.
Moreover, interviewees explained how the further upstream a researcher travels to assess embodied emissions, the harder it becomes to decipher how many of the upstream carbon emissions are associated with a specific downstream device. This is because each device component is a very small percentage of an upstream process that provides components to multiple devices: ‘the further upstream you go the less determinate it becomes in terms of being attributable to electronics, so that’s part of the, the complication’ (interviewee 17, social scientist).
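In LCA practice, this attribution problem is often handled by allocation: dividing an upstream process’s emissions across its outputs by some key, such as mass or economic value. The sketch below illustrates the arithmetic with hypothetical figures of our own; it is not a method reported by our interviewees.

```python
# Hypothetical mass-based allocation of upstream emissions to one component.
smelter_emissions_t = 50_000.0    # annual emissions of an upstream smelter
smelter_output_kg = 2_000_000.0   # its total annual metal output
metal_in_component_kg = 0.003     # metal ending up in one device component

# The component inherits its proportional share of the smelter's emissions.
share = metal_in_component_kg / smelter_output_kg
embodied_kg = smelter_emissions_t * 1000 * share
print(f"{embodied_kg:.3f} kgCO2e attributed to this component")  # ~0.075
```

The vanishingly small share illustrates the interviewee’s point: the deeper into the supply chain the allocation goes, the smaller and more assumption-laden each attributed fraction becomes.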
Even when embodied emissions were included, researchers would make different decisions about what emissions could or should be included in the assessment, and interviewees pointed to how life cycle assessments could be conducted in many different ways. This was because decisions needed to be made about where to draw boundaries: the wider you go, the more indeterminate and uncertain the figures; the narrower you go, the less chance of capturing all emissions in the calculations: ‘there’s a key difference in system boundaries. So, deciding what is going to be measured’ (interviewee 6); ‘where do you set the boundary?….where do you stop?’ (interviewee 3, science organisation). With different ways of conducting a life cycle assessment, and with little understanding of what the ‘correct’ [7] outcome should be, disagreements about the most appropriate approach to conducting an assessment were common. In one example, interviewee 19, a representative of a standards organisation in the sector, was concerned about boundaries being drawn too wide, bringing uncertain figures, and therefore assumptions, into calculations:
people draw boundaries too big. You’re building boundaries into an area where you have no certainty. So I can make up ...I can give you a Scope 3 inventory number for an operation but I’ll tell you flat out it’s full of crap. There’s four or five categories where I can give you a good number and then there’s ten categories, right, I mean ... I’m making stuff up, I’m doing it intelligently but I’m using formulas and options and there’s huge uncertainty to it.
The uncertainty was associated with disagreements regarding the relevant data and the methods of calculating emissions, which, as discussed below, vary depending on the research community involved in the assessments.

4.2. Different Bodies of Research That Are Looking at Very Different Things

Interviewees recognised that, because of the challenges associated with quantifying carbon emissions, accurately conducting a carbon emissions analysis required understanding and expertise across the whole digital sector–not only across supply chains and devices, but also across data and energy infrastructures and digital (IT, IoT) networks. Interviewee 8 remarked: ‘it’s not obvious how the systems behave and the experts are not necessarily experts in network technology’. Some participants–both academic researchers and those in industry–explained how they recognised this and that collaborations were part of their own everyday practices: ‘I collaborate with multiple people... from electrical engineering, people from mechanical engineering’ (interviewee 16, academic researcher, computer science); ‘we have a specific programme ongoing with 1,000 of our suppliers where we innovate together’ (interviewee 18, industry).
However, it emerged during interviews that many actors in the digital sector were perceived not to be collaborating. Interviewees viewed the sector as comprising a range of knowledge-generating communities, each trying to achieve the same goal but drawing on different disciplinary methods, literatures, and analyses: ‘different academics are approaching this in different ways...[..].....when people are using the literature, they are often choosing…different bodies of research that are looking at very different things’ (interviewee 6). This meant that each discipline was quantifying carbon in its own way and using its own processes, with disciplinary differences leading to different ways of viewing the numbers in carbon quantification. Interviewee 6 gave the example of ‘top-down’ and ‘bottom-up’ approaches to illustrate how researchers’ different methodologies relied on different assumptions from a specific body of research knowledge, ending up producing different energy consumption figures:
“How much energy does a data centre use?” You’re going to get estimates… maybe 4 or 500 terawatt hours of electricity…extreme estimates that say 2 or 3,000 terawatt hours..….There’s a lack of consistency in the methodologies…[..]..there’s a top-down number where you say, “Well, what is the total energy used by IT in a particular region? And how many people are there?” And then you just divide those two numbers….Or you can do what’s called a “bottom up approach” where you calculate the energy consumption of each individual piece of equipment, and….then... use that to calculate the energy intensity figure. And they often come to quite different results.
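To show how the two approaches can produce divergent figures, the toy calculation below uses hypothetical numbers of our own (not the interviewee’s): each method embeds a different assumption, and the results need not agree.

```python
# Toy comparison of top-down vs bottom-up data-centre energy estimates.
# All figures are hypothetical, chosen only to show how the methods diverge.

# Top-down: start from an aggregate statistic and divide it out.
national_it_electricity_twh = 500.0
data_centre_share_assumed = 0.4        # assumption baked into this method
top_down_twh = national_it_electricity_twh * data_centre_share_assumed

# Bottom-up: multiply up from per-equipment consumption figures.
servers = 70_000_000
kwh_per_server_year = 2_500.0          # another contested assumption
overhead_factor = 1.8                  # cooling, networking, power losses
bottom_up_twh = servers * kwh_per_server_year * overhead_factor / 1e9

print(top_down_twh, bottom_up_twh)     # roughly 200 vs 315: same question,
                                       # different embedded assumptions
```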
Similarly, interviewee 8, an academic in sustainability and computing, used the example of video streaming to explain how those in a different field approached quantification in different ways, meaning that different answers were constructed. In the extract below, this interviewee expressed frustration with environmental assessment experts whom they perceived not to be up to speed with the latest progress in carbon accounting methodologies for digital technologies:
if you’re an environmental assessment person you look at the network and you make assumptions of that….For digital, and we really only started to understand this in the last couple of years…we [use a different quantification model] and we began to realise how big the difference between the two were.
Interviewee 19 articulated the disciplinary power struggles at play between different research communities when quantifying carbon: because multiple realities can be produced from any set of numbers depending on the way in which the numbers are analysed, each discipline tried to push its specific view of how best to quantify carbon emissions:
because everybody plays games, right, it’s all for discipline and presentation and I like to say that, “Give me a set of facts then I can create you multiple realities brought across the whole spectrum of approaches.”
Furthermore, interviewees stressed how these communities were fragmented and siloed from one another: individuals in each community did not communicate about their carbon quantification practices:
the sector has…evolved…this…silo [of] subsectors….mono-discipline culture is absolutely pervasive in the sector..[..].. it’s almost as though people have to learn how to talk to people from other disciplines, you know, find a common language (interviewee 7, academic researcher, design/sustainability);
‘one of the problems we have with this industry is that it’s very siloed, so there isn’t a good deal of cross fertilisation’ (interviewee 9, scientific lead at a data centre).
This fragmentation of communities was problematic, remarked interviewee 9, because different communities in the sector (e.g., data centre operators, IT experts, thermal engineers) were each working towards their own goals on joint initiatives while failing to take a more holistic ‘bigger picture approach’: ‘[when] you understand your own discipline, you don’t really understand the effects that you have on the other aspects of operation’. Addressing this was perceived to be difficult in practice, and it worried a number of interviewees, who were concerned that it made it particularly difficult for researchers to draw on each other’s work across research communities in an appropriate way. The concern, explained interviewee 10, a digital energy analyst, was that when researchers from one community of knowledge draw on data from another, this leads to inaccuracies in modelling, because the underlying reliability and/or predictability of the data is black-boxed [23]. Researchers doing the modelling incorporate this knowledge into their calculations without understanding the underlying methodological assumptions and uncertainties associated with it, inevitably leading to inaccurate assessments:
most internet infrastructure at the data centres, all the data networks, your router at home, they all operate with a very high fixed energy cost, so there is a fundamental misunderstanding by some of these researchers about how the equipment actually operates in reality (interviewee 10, digital energy analyst).
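The technical point behind this quote can be made with a small amount of arithmetic. Because much network equipment draws near-constant power regardless of traffic, dividing total energy by total data yields an ‘average’ per-gigabyte factor that greatly overstates the marginal energy of any one service. The figures below are hypothetical illustrations of ours, not the interviewee’s data.

```python
# Why per-GB averages mislead for infrastructure with high fixed energy use.
fixed_kwh_per_year = 8_000_000.0    # drawn whether or not traffic flows
marginal_kwh_per_gb = 0.0005        # small additional draw per GB carried
total_gb_per_year = 2_000_000_000.0

# Attributional "average" factor: total energy divided by total data.
total_kwh = fixed_kwh_per_year + marginal_kwh_per_gb * total_gb_per_year
avg_factor = total_kwh / total_gb_per_year   # 0.0045 kWh/GB

extra_gb = 1_000_000.0                 # one service adds this much traffic
print(avg_factor * extra_gb)           # ~4500 kWh attributed by averaging
print(marginal_kwh_per_gb * extra_gb)  # ~500 kWh actually added
```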
Misunderstandings across research communities, differences in methodologies, and the lack of a common language bring disagreements about accurate methods for assessing carbon emissions. In the extract below, interviewee 11, an academic researcher working with data centres, describes their own attempts to assign metrics and models to a specific carbon emission problem, all the while realising that, because of the heterogeneity of methodologies and metrics used in the field, their metrics will likely be met with more disagreement than agreement, with other researchers preferring to use their own methodologies and calculations:
there’s not an easy metric you can go back to….[..]..just using data from life cycle assessments, we tried to give it some different values here and there. But you know, you’ll probably find more people disagreeing with those values, than agreeing with them… and you couldn’t scientifically prove that that was, you know, the right answer, there would be a debate about each, every single one of those values.
As we see below, such disagreements among experts, coupled with a lack of standards, make it hard to find a sense of direction in this field.

4.3. The Need for a Common Accepted Benchmark

Nearly all interviewees pointed to a lack of sector standards. This raised challenges concerning interoperability and frustrated interviewees because it meant that researchers were using different metrics in their calculations, making comparison difficult. The lack of standards, as well as the challenges and uncertainties inherent in calculations, meant that even within the same research communities (as well as between them) interviewees were making their own decisions about what to count and what to leave out of their carbon emission assessments. These differences mattered: without being able to compare different quantifying methods, interviewees explained, it was difficult to understand how well different organisations were reducing their carbon emissions in comparison to others:
Microsoft will say that it is carbon neutral by a certain date and it’s aiming towards being carbon negative by another date. But…how it calculates its carbon footprint is possibly not identical to the way that Google actually measures its carbon footprint…..there’s, there’s no way of comparing one to another because they’re not operating across the same metrics.. (interviewee 4).
One interviewee described how their own company’s policies around carbon quantification might be different to those of other companies: ‘[this company’s] sustainability report, they do not report scope three. They consider scope three to be somebody else’s scope one and two problem’ (interviewee 13).
Interviewee 19 was concerned that a lack of standards meant that double or triple counting could occur in a particular supply chain. They provided the example of a data centre accounting for its emissions through its electricity supply chain:
the data centre operator who is buying electricity from that grid region then takes their scope 1 emission and applies it to my operation. So now I’ve double counted it…. So then I’m now gonna go to my supplier…I say, “…I want to know what your emissions are,”.....So now I’ve counted that CO₂ a third time, right because it’s been counted by the utility, it’s been counted by my supplier and now it’s been counted by me.
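A stylised version of the triple-counting the interviewee describes, using hypothetical figures: the same block of generation emissions is reported once by the utility, once by the supplier, and once again by the data centre surveying its supply chain.

```python
# Stylised triple-counting along an electricity supply chain (hypothetical).
generated_tco2e = 1_000.0  # emissions from generating one block of power

utility_scope1 = generated_tco2e   # counted by the utility as Scope 1
supplier_scope2 = generated_tco2e  # counted by the supplier as Scope 2
my_scope3 = generated_tco2e        # counted by me when I survey my supplier

naive_chain_total = utility_scope1 + supplier_scope2 + my_scope3
print(naive_chain_total)  # 3000.0: one physical emission, reported 3 times
```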
The lack of standards was also perceived to be problematic because standards were viewed as a realisation of the ‘correct’ approach to carbon quantification–they moralised the way in which carbon accounting should be done, and this was seen as something much needed in the sector: ‘lots of different…organisations [need to] say… “Okay, what, what is fair here?”…What does good look like?....What do we agree that good looks like?’ (interviewee 4). Standards also provided legitimacy for the carbon quantification approach taken. Without standards, and with companies making personal choices about what and how to account for emissions, carbon accounting and reporting lacked legitimacy. This meant that any findings lacked meaning outside of those who produced them: as interviewee 12 emphasised, numbers did not become ‘real’ unless they had legitimacy across the sector, and the construction of knowledge only gained legitimacy when the knowledge was standardised:
these things become real when they go across a sector. You know, if you’ve got one company saying, “Well, hey, look. We’re assessed our own practices.” It’s like: “Yeah, okay.” You know, it doesn’t, it doesn’t mean anything unless it’s, unless it’s a kind of common accepted benchmark.
The lack of standards was perceived to be related to the newness of the field. Different actors were currently pushing their own views, and the field had not matured enough to choose a way of seeing and knowing the world. However, moving towards standards was viewed as tricky. Interviewee 11 described the difficulties of trying to reach consensus in the field because each group of actors was trying to push its own standards as best practice. Interviewee 22, an NGO representative, spoke about their attempts to bring different communities of knowledge together, with only limited progress. In the extract below, they described how they brought together digital technology sector participants from different knowledge communities, academia, and industry to discuss how to standardise calculating carbon emissions for the sector. The meetings, they explained, quickly became dominated by only one or two experts in the specific area under discussion, with those who lacked core-set expertise unable to contribute:
I’m working with…climate scientists, hardware engineers….we’re just not, a little bit not speaking the same language….We’ve started a working group.….I would say maybe only three people would be speaking where there’s 30 participants in that call. So it gets very technical, very fast … and then they’ll go into so much detail about the process of something going from A to B and then you know we have a coacher during the call and they’ll say, well any questions, any comments? No, because you, we don’t know what we don’t know. Right.

5. Discussion

Our findings show that collecting, agreeing on, and acting on quantifiable data is not easy or straightforward. First, stakeholders deal with complex systems, in which it is difficult to disentangle data about the emissions of single system components. Calculations are difficult not only because data are unavailable, incomplete, and/or outdated, but also because decisions have to be made about how to draw boundaries across these complex systems: boundaries define which aspects and components are considered valuable and relevant, and are therefore included in the measurement. Second, the methods used to calculate emissions vary across disciplines. This results in what we referred to as a tension among different research communities: a fragmentation across groups of stakeholders sharing the same methodological approaches and implicit assumptions about measuring emissions. These groups are sometimes unable to engage in fruitful conversations, and struggle to agree on standards or norms that can be used in comparative evaluations. Third, this absence of standards curbs positive action in the field and translates into a lack of shared meaning and sense of guidance for stakeholders, which, in turn, translates into a lack of actionable data. Our findings thus show a vicious circle: a lack of data prevents reliable calculations of carbon emissions; divergent methodologies and approaches prevent agreement on common standards; and the absence of standards perpetuates the lack of reliable data, which makes it difficult to calculate impacts and leads to calls for more data.
The findings can be understood in the context of scholarship that has illustrated how data do not exist objectively in the wild as truths waiting to be discovered, but are artefacts socially constructed as objects/subjects of knowledge (for example, see [8,9]). As described in the introduction, this scholarship has highlighted the complex, often messy, social processes involved in materialising data. This brings uncertainty and complexity into the process of knowledge production, where personal decisions are often made about what to count and what not to. In the field of carbon quantification, for instance, scholars argue that representations of carbon emissions are constructed through these socio-technical processes, relationships, and interactions between actors, organisations, data, information, and policies [1,4,10,11]. The materialising of carbon quantification is therefore a complex process of framing, selecting, gathering, measuring, operationalising, negotiating, and shaping. In line with this scholarship, our findings reveal that studies often start from different assumptions, include individual value-based judgements in their data models, and vary in both scope (what data infrastructure is included in the calculations and what may be left out) and the stage of the supply chain measured. The different quantification methods gave rise to different facts, and then to different predictions about the overall carbon emissions attributed to the digital sector, and therefore to different values associated with the urgency of the sector’s need to reduce carbon emissions [16].
Participants tried to manage this, believing that the imposition of standards could secure legitimacy for specific methods. They understood that the production of a carbon emission number means little unless others use it, and standards were viewed as a way of achieving this. They also understood that standards are required to provide a reliable basis for meaningful change, because they facilitate a more collective approach to carbon quantification–the alternative is each community pulling in different directions. In this way, they viewed standards as being able to solve many of the issues and challenges they encountered with carbon quantification.
One way of doing this was to address the tensions among different actors and research communities. However, in line with what has repeatedly been discussed in the literature (for example, see [24,25,26]), such a transdisciplinary and multi-sector approach was perceived as difficult by our participants because of the fragmentation of different research communities, along with the tendency to black-box the uncertainties associated with specific methodologies. With little communication between communities, each community struggled to understand the underlying assumptions baked into the data presented by the others. Furthermore, as our interview findings hinted at, and as the critical literature on standards has long argued (for example, see [27]), while standards are indeed a vital aspect of carbon quantification, standards are themselves social constructions of a particular reality and are deeply political: choosing a standard method is a socio-political process, dependent on which methods (and actors) gain the most prominence through the social and political processes of legitimisation. During standard-making, decisions are made about what gets counted and what does not. When a standard is implemented, these decisions become accepted; and as the processes and the values embedded within them become normalised and taken for granted in society, and as they become more long-standing and extensive, what was not chosen to be counted becomes invisible [10]. In doing so, standards render some aspects of carbon emissions invisible and/or irrelevant [28,29]–so much so that we forget to question why and how values come to be expressed quantitatively [29]. Furthermore, having standards may reinforce modernist assumptions that place faith in the ability to solve climate change challenges through managing carbon [4,5].
This is not to say that standards are not important–of course they are–but as the community drives calls towards standard-making, we need to problematise the belief that more data are required so that we can reach a place of certainty before decisions about standards are made, as well as the belief that once standards are made, the issues associated with carbon calculations will have been resolved. We argue that a better way to view the issue is to recognise that uncertainties will always exist, and that trying to resolve them before we take action may be untenable. This is not only because carbon emissions quantification is based on the best numbers available, meaning that it will never represent an objective and impartial description of reality, and ignoring this can lead to ‘fake precisionism’ [30], but also because it leads to an impasse. In terms of the latter, the community seems to have adopted a particular framing that dooms it to failure: if the aim has been interpreted as the need to arrive at precision and confidence, it will never be achieved. We need to reframe carbon accounting as one tool in helping us understand the ‘bigger picture’, such that it can act as an indicator of whether we are moving in the right direction and/or where to spend effort to reduce emissions. To do so, and to address the lack of agreement between communities on what to include in the calculations, we can look to the literature on ‘deep uncertainty’ [31].

Deep Uncertainty

In the early 1990s, Funtowicz and Ravetz recognised that the emergence of complex scientific challenges created by dynamic environmental systems, where “facts [are] uncertain, values in dispute, stakes high and decisions urgent”, was creating a new phase, or paradigm, for the use of science, which they termed ‘post-normal’ [12,32,33] (p.138). The problem of carbon accounting is a classic example of this post-normal science. Funtowicz and Ravetz (p.7) claim that “[p]olicy-makers tend to expect straightforward information as inputs to the decision making process; they want their numbers to provide certainty”. Obviously, this is rarely possible, so uncertainty can provide an excuse for inaction, with Funtowicz and Ravetz (p.15) going on to suggest that “[p]rocrastination is as real a policy option as any other, and indeed one that is traditionally favoured in bureaucracies; and ‘inadequate information’ is the best excuse for delay” [12]. Within the context of climate change, for example, Lewandowsky et al. (p.1) describe the use of “uncertainty as an argument to delay mitigative action” [34], with Oreskes and Conway (p.267) suggesting that this is because “we have an erroneous view of science. We think that science provides certainty, so if we lack certainty, we think the science must be faulty or incomplete” [35]. It is not possible for science to provide certainty: all data include inherent, or aleatory, uncertainties, which cannot be reduced [36]. Epistemic uncertainties could be reduced with the collection of more data or the use of new analysis techniques; however, these could also expose further uncertainties or areas of ignorance [37].
As our findings have shown, calculating carbon emissions is an exemplar of such deep uncertainty. This literature stresses that we need to manage these inherent uncertainties, rather than believe that they can be addressed through standards following the collection of more data, or that standards will resolve all issues. This is not least because the fast pace of change in the DT context, as we saw in our findings, means that any standard for calculation would soon become outdated anyway.
Under such extensive uncertainties, decisions are still needed to move forward; that is, decision-making under deep uncertainty is required [38]. It is not our intention to suggest a specific method, or combination of methods, for decision-making under deep uncertainty for this challenge, because the focus on precise quantification has meant that the work needed to create methodologies for dealing more productively with uncertainty has been neglected, and it is this work that now needs to be driven forward. Rather, our aim is to highlight this as a direction that needs to be taken to overcome some of the deep uncertainties we have described that are inhibiting action, and in doing so to draw attention to emerging approaches for making decisions when faced with complex systems and multiple actors, and where standardisation is not feasible. Approaches for dealing with deep uncertainty, such as those described in Marchau et al. [38], can provide ways to develop an acceptable or ‘good enough’ way to produce an outcome [39], incorporating elements of adaptability and flexibility so that new knowledge can be included as it becomes available [40]. This could be the development of a simplified, but plausible, framework for carbon quantification using the data that are currently available, together with logical assumptions. In some areas this is already being done, as with the interviewee who was “making stuff up” but “doing it intelligently”. The method can then be updated as data become available or new technologies are developed. There appears to be a group of people who are prepared to work together, and if they can reach a consensus on this, they could pave the way for others to follow. Crucial to whatever method is taken forward, any standards developed must be designed to be flexible to change, with the limitations and assumptions within the standards, as well as excluded criteria, made explicit. Bringing communities together helps with this, because it exposes underlying norms, areas of missing data, and/or aspects that remain uncounted. The role of standards then becomes more than just providing an accepted method for counting: standards also keep uncounted elements, and any remaining uncertainties, exposed, so that we can move iteratively forward as we gain more data. In essence, standards must not be seen as ‘job done’, but as merely the beginning of addressing the issues of deep uncertainty associated with carbon calculations in the DT sector.
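As a thought experiment, one concrete shape such an adaptive standard could take is a versioned registry of emission factors in which every figure carries its uncertainty and its assumptions explicitly, so that an update replaces a stated assumption rather than silently overwriting it. This is a speculative sketch of ours, not a design proposed by our interviewees:

```python
# Speculative sketch of an adaptive, assumption-explicit factor registry.
from dataclasses import dataclass

@dataclass(frozen=True)
class EmissionFactor:
    value: float        # kgCO2e per unit
    uncertainty: float  # +/- range, carried alongside the number
    assumption: str     # stated explicitly, never silently baked in
    version: int

registry = {
    "network_per_gb": EmissionFactor(
        value=0.005, uncertainty=0.004,
        assumption="average grid mix; fixed costs averaged over annual traffic",
        version=1),
}

def update(name: str, new: EmissionFactor) -> None:
    """Accept a factor only if it supersedes the current version."""
    if new.version <= registry[name].version:
        raise ValueError("an update must supersede the current version")
    registry[name] = new  # the old assumption is replaced, not hidden
```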

References

  1. Gabrys, J., 2016. Practicing, materialising and contesting environmental data. Big Data & Society, 3, 2, 2053951716673391. [CrossRef]
  2. Whitington, J., 2016. Carbon as a Metric of the Human. PoLAR: Political and Legal Anthropology Review, 39, 1, 46-63. [CrossRef]
  3. Jasanoff, S., 2017. Virtual, visible, and actionable: Data assemblages and the sightlines of justice. Big Data & Society, 4, 2, 2053951717724477. [CrossRef]
  4. Peck, F. A., 2016. Carbon Chains: an elemental ethnography. Santa Cruz, California.
  5. Espinoza, M. I. and Aronczyk, M., 2021. Big data for climate action or climate action for big data? Big Data & Society, 8, 1, 2053951720982032. [CrossRef]
  6. Hoeyer, K., 2019. Data as promise: Reconfiguring Danish public health through personalized medicine. Social Studies of Science, 49, 4, 531-555. [CrossRef]
  7. Collins, H. M., 1983. The Sociology of Scientific Knowledge: Studies of Contemporary Science. Annual Review of Sociology, 9, 265-285. [CrossRef]
  8. Berman, E. P. and Hirschman, D., 2018. The Sociology of Quantification: Where Are We Now? Contemporary Sociology, 47, 3, 257-266. [CrossRef]
  9. Kovacic, Z., 2018. Conceptualizing Numbers at the Science–Policy Interface. Science, Technology, & Human Values, 43, 6, 1039-1065. [CrossRef]
  10. Vesty, G. M., Telgenkamp, A. and Roscoe, P. J., 2015. Creating numbers: carbon and capital investment. Accounting, Auditing & Accountability Journal, 28, 3, 302-324. [CrossRef]
  11. Lippert, I., 2015. Environment as datascape: Enacting emission realities in corporate carbon accounting. Geoforum, 66, 126-135. [CrossRef]
  12. Funtowicz, S. O. and Ravetz, J. R., 1990. Uncertainty and Quality in Science for Policy. Kluwer Academic Publishers, Dordrecht.
  13. European Commission, 2019.
  14. Junge, A. L. and Straube, F., 2020. Sustainable supply chains – digital transformation technologies’ impact on the social and environmental dimension. Procedia Manufacturing, 43, 736-742. [CrossRef]
  15. Dauvergne, P., 2020. AI in the Wild: Sustainability in the Age of Artificial Intelligence. MIT Press.
  16. Freitag, C., Berners-Lee, M., Widdicks, K., Knowles, B., Blair, G. S. and Friday, A., 2021. The real climate and transformative impact of ICT: A critique of estimates, trends, and regulations. Patterns, 2, 9, 100340. [CrossRef]
  17. Blair, G., 2020. A tale of two cities: reflections on digital pollution. Oxford University, May 13th. [CrossRef]
  18. Alcott, B., 2005. Jevons’ paradox. Ecological Economics, 54, 1, 9-21. [CrossRef]
  19. Hilty, L. M., Köhler, A., Von Schéele, F., Zah, R. and Ruddy, T., 2006. Rebound effects of progress in information technology. Poiesis & Praxis, 4, 1, 19-38. [CrossRef]
  20. Takahashi, K. I., Tatemichi, H., Tanaka, T., Nishi, S. and Kunioka, T. 2004. Environmental impact of information and communication technologies including rebound effects. IEEE International Symposium on Electronics and the Environment. Conference Record.10-13 May.
  21. Smith, M., Knowles, B., Widdicks, K., Blair, G., Samuel, G., Jirotka, M., Lucivero, F., Ten Holter, C. and Sommavilla, L. 2024. Greater than the sum of its parts: exploring a systemic design inspired responsible innovation framework for addressing ICT carbon emissions. Proceedings of Relating Systems Thinking and Design 12.
  22. Braun, V. and Clarke, V., 2022. Thematic Analysis: A Practical Guide. SAGE Publications Ltd.
  23. MacKenzie, D., 1998. The Certainty Trough. In: Williams, R., Faulkner, W. and Fleck, J. (eds.) Exploring Expertise. Palgrave Macmillan.
  24. Gordon, L. R., 2014. Disciplinary decadence and the decolonisation of knowledge. Africa Development, 39, 1.
  25. Scharff, R. C. and Stone, D. A., 2022. Transdisciplinarity Without Method: On Being Interdisciplinary in a Technoscientific World. Human Studies. [CrossRef]
  26. Lavorgna, A. 2020. Epistemologies of cyberspace: notes for interdisciplinary research. Palgrave Macmillan.
  27. Timmermans, S. and Epstein, S., 2010. A world of standards but not a standard world: Toward a sociology of standards and standardization. Annual review of Sociology, 36, 69-89. [CrossRef]
  28. Lohmann, L., 2009. Toward a different debate in environmental accounting: The cases of carbon and cost–benefit. Accounting, Organizations and Society, 34, 3, 499-534. [CrossRef]
  29. Espeland, W. N. and Stevens, M. L., 1998. Commensuration as a Social Process. Annual Review of Sociology, 24, 1, 313-343. [CrossRef]
  30. Power, M., 2004. Counting, Control and Calculation: Reflections on Measuring and Management. Human Relations, 57, 6, 765-783. [CrossRef]
  31. Lempert, R. J., Popper, S. W. and Bankes, S. C., 2003. Shaping the Next One Hundred Years: New Methods for Quantitative, Long-Term Policy Analysis, Santa Monica, California.
  32. Funtowicz, S. O. and Ravetz, J. R. 1991. A New Scientific Methodology for Global Environmental Issues. Columbia University Press.
  33. Funtowicz, S. O. and Ravetz, J. R., 1993. Science for the post-normal age. Futures, 25, 7, 739-755. [CrossRef]
  34. Lewandowsky, S., Ballard, T. and Pancost, R. D., 2015. Uncertainty as knowledge. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 373, 2055, 20140462. [CrossRef]
  35. Oreskes, N. and Conway, E. M., 2010. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. Bloomsbury, London.
  36. van Asselt, M. B. A. and Rotmans, J., 2002. Uncertainty in Integrated Assessment Modelling. Climatic Change, 54, 1, 75-105. [CrossRef]
  37. Skinner, D. J. C., Rocks, S. A. and Pollard, S. J. T., 2014. A review of uncertainty in environmental risk: characterising potential natures, locations and levels. Journal of Risk Research, 17, 2, 195-219. [CrossRef]
  38. Marchau, V. A., Walker, W. E., Bloemen, P. J. and Popper, S. W. (eds.), 2019. Decision Making under Deep Uncertainty: From Theory to Practice. Springer, Cham.
  39. Ben-Haim, Y. 2019. Info-Gap Decision Theory (IG). Springer.
  40. Walker, W. E., Marchau, V. A. and Kwakkel, J. H. 2019. Dynamic Adaptive Planning (DAP). Springer.
1. Many LCAs assess many environmental impacts associated with a product’s, organisation’s, or sector’s lifecycle, but here we focus on carbon emissions.
2. Digital technologies (DTs) allow for the datafication of things; they gather, store, and process data for various uses, including machine learning technologies and other artificial intelligence (AI) algorithms. Examples of DTs include data centres, information and communication technologies (ICT), the internet of things (IoT), and digital infrastructures and devices.
3. VOSviewer (www.vosviewer.com), a software tool for constructing and visualising bibliometric networks.
Table 1. Self-reported sector of interviewees. Some interviewees had multiple roles.
Self-reported sector of interviewee | Interviewees | Number interviewed
Academic researchers (computer scientists, sustainability experts, social scientists, engineers, societies) | 1, 2, 6, 7, 8, 11, 16, 17, 23, 24 | 10
Industry (commercial, corporate, spin-offs, directors, researchers, alliances, organisations) | 1, 2, 4, 13, 15, 18, 20, 21 | 8
Data centre representative and/or consultant or involved with the sector’s markets | 2, 9, 10, 13, 21 | 5
Policymaker/consultant (funding bodies, organisations associated with standards) | 3, 5, 12, 14, 19 | 5
NGO | 22 | 1