May 30, 2024

AI: between opportunities and dangers, how to reconcile technology and sobriety?


Artificial intelligence (AI) has become ubiquitous in our daily lives and a key driver of innovation for businesses. While it offers pragmatic and promising answers to crucial challenges such as health, safety and the ecological transition, pushing us to explore its applications ever further, we must remain alert to its risks, and in particular to the rebound effect inherent in its intensive use.


In his article "Artificial intelligence: the rebound-effect trap", Escande, economics editorialist at Le Monde, highlights the difficulty of reconciling technology and sobriety: "Massive investments, such as those by Microsoft, have led to a significant increase in CO2 emissions. Between 2023 and 2024, the company invested $50 billion in new data centers, resulting in a 30% increase in carbon emissions compared to 2020."

Among other things, GPT-3's training phase is estimated to have consumed around 700,000 liters of water and emitted 150 tons of CO2 - the equivalent of 150 to 200 round trips between Paris and New York!
These figures jeopardize global climate objectives, such as those set out in the Paris Agreement.
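The flight equivalence above is easy to sanity-check. As a rough sketch, assuming (this figure is not from the article) that a Paris-New York round trip emits roughly 0.75 to 1 tonne of CO2 per passenger:

```python
# Order-of-magnitude check of the "150 to 200 round trips" equivalence.
# Assumption (not from the article): a Paris-New York round trip emits
# roughly 0.75 to 1 tonne of CO2 per passenger.
training_emissions_t = 150  # tonnes of CO2 cited for the training phase

trips_low = training_emissions_t / 1.0    # upper per-trip estimate
trips_high = training_emissions_t / 0.75  # lower per-trip estimate

print(trips_low, trips_high)  # 150.0 200.0
```

Under that assumed per-passenger figure, the cited range of 150 to 200 round trips is consistent.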

This rebound effect was anticipated in the 19th century by the British economist William Stanley Jevons: the more efficient a technology becomes, the more total consumption tends to rise, as uses and markets multiply.

In the context of AI, although the efficiency gains are promising, the technology's development is expanding demand, bringing additional risks and pollution.
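The mechanism can be illustrated with a minimal sketch; the figures below are purely hypothetical, chosen only to show how demand growth can outweigh an efficiency gain:

```python
# Hypothetical illustration of the Jevons rebound effect: efficiency per
# use improves by 40%, but the cheaper, more capable technology drives
# demand up 2.5x, so total consumption still rises.
def total_consumption(uses: int, energy_per_use: float) -> float:
    return uses * energy_per_use

before = total_consumption(uses=1_000, energy_per_use=1.0)  # 1000.0 units
after = total_consumption(uses=2_500, energy_per_use=0.6)   # 1500.0 units

assert after > before  # demand growth outweighs the efficiency gain
```

In this toy scenario, a 40% efficiency gain is more than cancelled out once usage multiplies, which is exactly the trap Jevons described.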

Questions of security and ethics are also crucial. The use of sensitive data can lead to issues of confidentiality, transparency and security, requiring robust measures to prevent information leaks and biases in the decisions made by AI.

Opportunities not to be overlooked

Despite this, AI enables considerable time savings by automating repetitive tasks, freeing up human resources for more strategic tasks. It also improves the accuracy and reliability of analyses, such as those of ESG (environmental, social and governance) data, enabling more informed decision-making. Finally, by optimizing processes and decisions, AI contributes to greater energy efficiency.
For example, it can optimize the management of electricity grids, predict physical and transition risks linked to climate change, or enable the detection of invasive plant species.

At Axionable, we have chosen to focus precisely on this last theme, using AI to help our customers accelerate their sustainable transition, to measure and reduce their carbon footprint, predict physical risks or optimize responses to biodiversity-related issues.

So how can we overcome the AI paradox and use it effectively and responsibly?

Faced with these risks, regulation and incentives are a strong first step: first proposed in 2021, the European AI Act establishes a graduated scale of risk to frame the uses of AI.

In addition, emerging certifications, such as those developed by LNE (Laboratoire National de Métrologie et d'Essais) and Labelia Labs, aim to guarantee the performance, safety and ethics of the AI systems they audit.

As for the emissions generated, a number of emerging innovations could help reduce this pollution, but at what cost? The risk remains that these future improvements will merely mitigate negative impacts without eliminating them.

Breaking out of this climate paradox is a considerable challenge for our modern societies, requiring a reconciliation between technology and sobriety.

Designing and using AI solutions brings with it a responsibility

At Axionable, as AI practitioners, we believe it is essential to remember our collective responsibility: enforcing rigorous governance and promoting a thoughtful, ethical use of this technology to ensure a sustainable future.
This includes popularizing and understanding its impacts, promoting sober usage, and developing more sustainable and resilient low-tech alternatives.
By adopting sustainable practices and establishing effective safeguards, we can mitigate AI's negative impacts and maximize its benefits for the common good.

First of all, it seems essential to us that an AI solution be implemented only if it meets both of the following criteria:
- It targets a sustainable use case
- It complies with the main principles of responsible, trustworthy AI

What are we waiting for, then, to adopt a conscious and responsible approach to optimizing the benefits these technologies can offer us for the common good?
