Artificial intelligence (AI) has become ubiquitous in our daily lives and a key driver of innovation for businesses. While it offers pragmatic and promising solutions to crucial issues such as health, safety and the ecological transition, driving us to explore its applications ever further, we must remain wary of its risks, and in particular of the rebound effect inherent in its intensive use!
Philippe Escande, economics editorialist at Le Monde, highlights the difficulty of reconciling technology and sobriety in his article "Artificial Intelligence: The rebound effect trap": "Massive investments, such as those by Microsoft, have led to a significant increase in CO2 emissions. Between 2023 and 2024, the company invested $50 billion in new data centers, resulting in a 30% increase in carbon emissions compared to 2020."
Among other things, GPT-3's training phase is said to have consumed around 700,000 liters of water and emitted some 150 tons of CO2 - the equivalent of 150 to 200 round trips between Paris and New York, assuming on the order of one tonne of CO2 per passenger per round trip!
These figures jeopardize global climate objectives, such as those set out in the Paris Agreement.
This rebound effect, already anticipated by the British economist William Stanley Jevons in the 19th century, confirms his observation that the more efficient a technology becomes, the more total consumption rises as its uses and markets multiply.
In the context of AI, although the efficiency gains are promising, the development of the technology is expanding demand, bringing with it additional risks and pollution.
Questions of security and ethics are also crucial. The use of sensitive data can lead to issues of confidentiality, transparency and security, requiring robust measures to prevent information leaks and biases in the decisions made by AI.
Despite this, AI enables considerable time savings by automating repetitive tasks, freeing up human resources for more strategic work. It also improves the accuracy and reliability of analyses, such as those of ESG (environmental, social and governance) data, enabling more informed decision-making. Finally, by optimizing processes and decisions, AI contributes to greater energy efficiency.
For example, it can optimize the management of electricity grids, predict the physical and transition risks linked to climate change, or detect invasive plant species.
At Axionable, we have chosen to focus precisely on this last point, using AI to help our clients accelerate their sustainable transition: measuring and reducing their carbon footprint, predicting physical risks, or optimizing responses to biodiversity-related issues.
Faced with these risks, regulation and incentives are an important first step: the European Union's AI Act, first proposed in 2021, introduces a graduated scale of risk levels to frame the use of AI.
In addition, emerging certifications, such as those created by LNE (Laboratoire National de Métrologie et d'Essais) or Labelia Labs, aim to guarantee the performance, safety and ethics of the AI systems they audit.
As for the emissions generated, a number of emerging innovations could help reduce this pollution, but at what price? The risk remains that these future improvements will merely mitigate negative impacts, without eliminating them.
Breaking out of this climate paradox is a considerable challenge for our modern societies, requiring a reconciliation between technology and sobriety.
At Axionable, as AI practitioners, we believe it is essential to remember our collective responsibility: to impose rigorous governance and to promote a thoughtful, ethical use of this technology in order to ensure a sustainable future.
This includes raising awareness and understanding of its impacts, promoting sober usage and developing more sustainable, resilient, low-tech alternatives.
By adopting sustainable practices and establishing effective safeguards, we can mitigate AI's negative impacts and maximize its benefits for the common good.
As a first step, it seems essential to us that an AI solution should be implemented only if it meets both of the following criteria:
- It targets a sustainable use case
- It complies with the main principles of responsible/trustworthy AI
What are we waiting for, then, to adopt a conscious and responsible approach to optimizing the benefits these technologies can offer us for the common good?