Can AI Agents Be Sustainable? What Sasha Luccioni's Research Reveals About Green AI
Here is the question every sustainability leader needs to answer in 2026: can AI agents actually be sustainable? Sasha Luccioni, speaking at AI Festival 2026, gives the clearest answer available: an AI deployment's footprint depends on which models are chosen and how they are used. That insight cuts through the debate. The question is not whether AI is sustainable or unsustainable in the abstract; it is whether you make sustainable choices in deploying it.
CodeCarbon makes the measurement step accessible. Making energy consumption visible is the first step toward reducing it, and CodeCarbon encourages more responsible use by letting individuals and organizations see what their choices cost. Beetroot works the other side of the ledger, helping organizations measure and manage their carbon footprint with AI and proving that the technology can be applied to environmental management as well as generating environmental cost.
This blog is a practical guide to sustainable AI deployment: the green AI framework, which models to choose, how to measure, and how to actually reduce AI's environmental footprint while getting its capability benefits.
Sasha Luccioni's Core Insight — Two Leverage Points for Reducing AI's Footprint
Luccioni's research establishes a framework with two leverage points. AI footprint depends on models chosen, and AI footprint depends on how those models are used. The same AI task can have dramatically different environmental costs depending on the choices made at both levels.
At the model level, a smaller efficient model and a frontier model can differ by 10 to 100 times in energy consumption for equivalent task completion. Smaller models can handle most enterprise tasks at a fraction of the energy cost of frontier models. Using GPT-5 or Claude Opus for simple Q&A that a much smaller model could handle is environmental waste that compounds at scale.
At the usage level, how AI is deployed matters enormously. Running high-volume simple tasks through large models is the most wasteful configuration possible. Batching requests, caching responses, using asynchronous processing for non-urgent tasks, and scheduling compute-heavy workloads for when renewable energy is available are all usage pattern choices that reduce footprint without reducing capability.
Luccioni's framing: making energy consumption visible is the first step to reducing it. When developers and sustainability teams can see the energy cost of their AI choices, they make better ones. Carbon tracking should be part of AI development and deployment governance, not an afterthought.
The practical implication: model choice is often made by engineers without sustainability input, so sustainability leaders need enough understanding of AI footprint to participate in that decision. Usage patterns are often set by tooling defaults without explicit optimization for environmental cost. Both gaps are fixable.
The Green AI Framework — Five Steps to Sustainable AI Deployment
Sasha Luccioni's research and the CodeCarbon methodology combine into a practical framework that organizations can implement.
Step 1: Measure Before You Optimize
Use CodeCarbon or equivalent to measure AI energy consumption across your deployments. Track energy per AI interaction, total AI energy consumption, and carbon per AI interaction. Establish baselines before implementing optimizations. Without baseline measurement, you cannot demonstrate improvement. Luccioni's research is clear that visibility is the prerequisite for reduction.
CodeCarbon estimates energy consumption from AI model runs and converts to carbon equivalents. It supports multiple frameworks and cloud providers. It is free and accessible to any organization running AI workloads. The measurement infrastructure investment is low. The insight value is high.
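Once a tracker like CodeCarbon reports total energy, the Step 1 metrics reduce to simple arithmetic. A minimal sketch of that calculation; the energy total, interaction count, and grid carbon intensity below are illustrative assumptions, not real measurements:

```python
# Sketch: deriving Step 1 metrics from a measured energy total.
# All numbers here are illustrative placeholders, not measurements.

def ai_footprint_metrics(total_energy_kwh: float,
                         interactions: int,
                         grid_intensity_g_per_kwh: float) -> dict:
    """Return energy per interaction, carbon per interaction, and total carbon."""
    energy_per_interaction = total_energy_kwh / interactions
    carbon_per_interaction_g = energy_per_interaction * grid_intensity_g_per_kwh
    return {
        "energy_kwh_per_interaction": energy_per_interaction,
        "carbon_g_per_interaction": carbon_per_interaction_g,
        "total_carbon_kg": total_energy_kwh * grid_intensity_g_per_kwh / 1000,
    }

# Hypothetical baseline: 120 kWh across 400,000 interactions
# on a grid averaging 400 gCO2e per kWh.
baseline = ai_footprint_metrics(120.0, 400_000, 400.0)
```

Capturing a baseline like this before any optimization is what makes the later improvement claims demonstrable.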
Step 2: Right-Size Model Selection
Match model capability to task complexity. Do not use frontier models for tasks that a smaller model could handle. GPT-4o mini and Claude Haiku can handle the majority of enterprise tasks at a fraction of the energy cost of GPT-5 or Claude Opus. Reserve frontier models for tasks that genuinely require complex reasoning, multi-step analysis, or capabilities that only the frontier models provide.
Luccioni's research confirms that model selection is the biggest lever for reducing AI footprint. A single model downgrade from frontier to efficient for a high-volume task can reduce energy consumption by an order of magnitude while maintaining task quality. This is not a marginal improvement. It is a structural change in your AI environmental cost.
Evaluate every AI use case and ask: does this genuinely require a frontier model? If the answer is no, use a smaller, more efficient model. Make this a governance question, not just an engineering default.
Step 3: Optimize Usage Patterns
Batch AI requests where possible instead of processing everything in real time. Cache AI responses for repeated queries rather than recomputing the same output. Use asynchronous processing for non-urgent AI tasks, and where possible, schedule heavy compute for times when renewable energy is more available on the grid. These are software architecture and workflow decisions that reduce environmental cost without reducing capability.
Luccioni: how you use the model matters as much as which model you choose. The combination of right-sizing model selection and optimizing usage patterns can reduce AI footprint by 90% or more for many enterprise use cases while maintaining equivalent output quality.
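The caching pattern from Step 3 can be as simple as memoizing responses to repeated identical prompts. A sketch with a stub standing in for a real model call; `cached_answer` and its return value are placeholders, not a real API:

```python
from functools import lru_cache

calls = {"count": 0}  # track how many times the "model" actually runs

@lru_cache(maxsize=4096)
def cached_answer(prompt: str) -> str:
    """Return a cached response for repeated identical prompts.

    Only a cache miss pays the energy cost of a model invocation;
    cache hits are effectively free.
    """
    calls["count"] += 1
    return f"answer to: {prompt}"  # stand-in for a real model call

# Ten identical queries cost one model invocation instead of ten.
for _ in range(10):
    cached_answer("What is our return policy?")
```

Batching and asynchronous scheduling follow the same principle: restructure when and how the model is invoked without changing what the user receives.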
Step 4: Choose Providers with Strong Environmental Commitments
Microsoft Azure: carbon negative by 2030, 100% renewable energy by 2025. Google Cloud: carbon neutral since 2007, working toward 24/7 carbon-free energy by 2030. AWS: 100% renewable energy by 2025. The cloud provider you choose affects the carbon footprint of your AI workloads regardless of which models you run or how you use them.
Ask your cloud providers about their water usage effectiveness and data center locations. Some facilities are significantly more water-efficient than others. Provider selection is a leverage point that sustainability teams can engage on directly.
Step 5: Set AI Sustainability Targets
Treat AI energy consumption like any other sustainability metric. Set targets for reducing AI carbon per interaction. Include AI environmental footprint in your ESG reporting. Make AI sustainability part of your AI governance framework.
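Treating carbon per interaction like any other KPI means tracking it against a baseline and a reduction target. A minimal sketch; the baseline, current value, and target percentage are illustrative assumptions:

```python
def reduction_vs_baseline(baseline_g: float, current_g: float) -> float:
    """Percentage reduction in carbon per AI interaction versus baseline."""
    return (baseline_g - current_g) / baseline_g * 100

def target_met(baseline_g: float, current_g: float, target_pct: float) -> bool:
    """True if the measured reduction meets or beats the target."""
    return reduction_vs_baseline(baseline_g, current_g) >= target_pct

# Illustrative numbers: baseline 0.12 g/interaction, current 0.03 g,
# against a 70% reduction target.
status = target_met(0.12, 0.03, 70.0)
```

Numbers like these are what feed ESG reporting once AI footprint is treated as a first-class sustainability metric.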
The organizations that treat AI environmental impact as a first-class sustainability concern, measured and targeted like any other environmental metric, will be ahead as disclosure requirements expand.
The Conservation Opportunity — AI for Environmental Management
Beetroot is doing the work that demonstrates AI's potential on the benefit side of the ledger. Organizations measuring and managing their carbon footprint with AI are proving that the technology has applications in environmental management, not just an environmental cost.
AI can optimize logistics routing, building HVAC systems, agricultural inputs, and manufacturing processes. The emissions reductions in those sectors can exceed AI's own footprint. That is the path to net-positive AI: minimize AI's own footprint through the green AI framework, maximize AI's environmental benefit by deploying it for conservation and optimization work.
Inno-Thought's framing is the right one: AI can cut global emissions, but only if developed sustainably. The organizations achieving net-positive AI are doing both things simultaneously. They are managing AI's own footprint through deliberate model selection and usage patterns, and they are deploying AI to cut emissions elsewhere at scale.
The sustainable AI requirement is not abstract. It is operational. Measure footprint. Right-size models. Optimize usage. Choose green providers. Set targets. Apply AI to environmental management. That is the complete picture of what sustainable AI deployment looks like in practice.
Start Measuring Today
CodeCarbon is free, and the green AI framework is implementable today for any organization running AI workloads. Getting measurement in place is the only real barrier to entry. Without measurement, you cannot set targets. Without targets, you cannot demonstrate improvement. Without visibility, you cannot make better choices.
The organizations that start measuring AI energy consumption now will have baselines, targets, and improvement data by the time disclosure requirements expand. The organizations that do not measure will be building that infrastructure under regulatory pressure with no historical context.
AI can be sustainable. Sasha Luccioni's research shows that it depends on deliberate choices. The question is whether your organization is making those choices deliberately or by default.