I use AI. I benefit from it. I’m even doing a study on it, using… AI. But what are the broader consequences of AI? Am I complicit, and if so, what am I complicit in?
AI’s Environmental Impact
Energy Consumption: AI data centers are projected to consume 224 terawatt-hours (TWh) of electricity in 2025, accounting for 5.2% of total U.S. power demand. This figure is expected to rise to 606 TWh by 2030, representing 11.7% of total U.S. power demand. [1] (A quick arithmetic check on these figures follows this list.)
Carbon Emissions: Training large AI models can consume substantial energy and generate significant carbon emissions. For example, estimates suggest that training a model at the scale of GPT-3 can emit hundreds of metric tons of CO₂, depending on the hardware, training duration, and energy source; that is comparable to the annual emissions of dozens of gasoline-powered vehicles. [2]
Water Usage: Training large AI models carries a significant water footprint. According to Li et al. (2023), training GPT-3 may have consumed approximately 700,000 liters of freshwater, primarily for cooling data center servers. That’s roughly equivalent to the amount of water used to manufacture 320 electric vehicles. [3]
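To keep the scale of these numbers honest, here’s a quick back-of-the-envelope check. The arithmetic below is mine, not the sources’: it derives the total U.S. power demand implied by the McKinsey shares and reduces the Li et al. water comparison to a single vehicle.

```python
# Back-of-the-envelope check on the cited figures; my arithmetic,
# not a calculation taken from the sources themselves.

ai_2025_twh, share_2025 = 224, 0.052   # AI data centers, 2025 [1]
ai_2030_twh, share_2030 = 606, 0.117   # AI data centers, 2030 [1]

# Implied total U.S. power demand each year (AI demand / AI share).
total_2025 = ai_2025_twh / share_2025  # ~4,308 TWh
total_2030 = ai_2030_twh / share_2030  # ~5,179 TWh

# The water comparison from Li et al. (2023), reduced to one vehicle.
water_liters, evs = 700_000, 320
liters_per_ev = water_liters / evs     # ~2,188 liters per EV

print(f"Implied U.S. demand: {total_2025:,.0f} TWh (2025) -> {total_2030:,.0f} TWh (2030)")
print(f"AI's added load alone: +{ai_2030_twh - ai_2025_twh} TWh over five years")
print(f"Water behind one EV in the comparison: ~{liters_per_ev:,.0f} liters")
```

The implied 2025 total, roughly 4,300 TWh, lines up with common estimates of overall U.S. electricity consumption, so the shares and absolute figures are at least internally consistent.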
AI’s Social Impact
Job Displacement: According to Eloundou et al. (2023), large language models like GPT-4 could significantly impact the labor market, with up to 19% of U.S. workers potentially seeing at least 50% of their tasks affected. Entry-level white-collar jobs, particularly those involving routine cognitive tasks, are among the most exposed. [4]
Bias and Discrimination: AI systems can replicate and even amplify existing societal biases, especially when trained on historical data that reflect structural inequities. Without meaningful ethical oversight, these algorithms risk entrenching patterns of exclusion and discrimination that disproportionately affect marginalized communities. [5]
Economic Inequality: Generative AI has the potential to widen the racial economic gap in the United States by $43 billion annually unless it is deployed thoughtfully to remove barriers to economic mobility. [6]
Epistemic Impact and Narrative Domination: Perhaps the most dangerous consequence of AI isn’t job loss or bias; it’s framing. Language models don’t just reflect culture; they produce it at scale, with speed, plausible tone, and trained coherence. When meaning itself becomes manufactured, curated, and optimized for engagement, the ability to think freely degrades. Not immediately. But incrementally. Subtly. Systematically.
Germani and Spitale (2025) found that large language models systematically rate information differently depending on how it is framed, particularly by who is cited as the source. For example, when identical statements were attributed to Chinese individuals instead of Americans, the models rated them lower in credibility, safety, and helpfulness. [7]
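To make that setup concrete, here is a minimal sketch of a paired-attribution probe in the spirit of their study: hold the statement fixed, vary only the attributed source, and compare the model’s ratings. The function names, prompt wording, and 0-10 scale are my illustrative assumptions, not the paper’s actual protocol, and the model call is a stub you would replace with a real LLM client.

```python
# A minimal sketch of a paired-attribution probe. Names, prompt, and
# rating scale are illustrative assumptions, not the paper's protocol.

STATEMENT = "Vaccines are a safe and effective public health tool."
SOURCES = ["an American researcher", "a Chinese researcher"]

def rate_credibility(statement: str, source: str) -> float:
    """Stand-in for an LLM call that returns a 0-10 credibility score.

    A real probe would prompt the model along the lines of:
    "On a scale of 0-10, rate the credibility of this statement
    by {source}: {statement}" and parse the number from the reply.
    """
    return 0.0  # placeholder; a real model supplies this value

def framing_gap(statement: str, sources: list[str]) -> dict[str, float]:
    # Identical content, different attribution: any spread across these
    # scores is framing bias, because only the source label changed.
    return {source: rate_credibility(statement, source) for source in sources}

if __name__ == "__main__":
    print(framing_gap(STATEMENT, SOURCES))
```

With a real model behind rate_credibility, a nonzero gap between the two scores would be exactly the effect the paper reports: the content never changes, only the claimed source does.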
Reflection
Using AI isn’t inherently wrong, but I would argue that using it without awareness—without facing its environmental, social, and structural consequences—is.
I use AI regularly. I benefit from it. I’m even studying it, using AI itself as part of that process. But that doesn’t make me exempt, so now what? I know that the tools I use aren’t neutral. I don’t get to hide behind complexity or scale.
So I’m asking: What exactly am I complicit in? What choices am I making, and what tradeoffs do I accept? Am I contributing to extraction, or am I actively working to reframe how these tools are developed, deployed, and understood?
That’s the frame I’m holding as I continue this work.
This video shares more about how I’m integrating AI into my practice, my research, and my thinking—without looking away from what it costs.
How About You?
Are you using AI? If so, how are you feeling about all of this? Your thoughtful comments and discussion are most welcome!
I Referenced a Video
Here’s the video with Steven Pinker that I reference. [8]
References
1. McKinsey & Company. (2024, April). AI’s power binge. https://www.mckinsey.com/featured-insights/sustainable-inclusive-growth/charts/ais-power-binge
2. Patterson, D., Gonzalez, J., Le, Q., Liang, C., Munguia, L. M., Rothchild, D., So, D. R., Texier, M., & Dean, J. (2021). Carbon emissions and large neural network training. arXiv. https://arxiv.org/abs/2104.10350
3. Li, P., Yang, J., Islam, M. A., & Ren, S. (2023). Making AI less thirsty: Uncovering and addressing the secret water footprint of AI models. arXiv. https://arxiv.org/abs/2304.03271
4. Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An early look at the labor market impact potential of large language models. arXiv. https://arxiv.org/abs/2303.10130
5. Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and machine learning: Limitations and opportunities. https://fairmlbook.org
6. McKinsey & Company. (2023). The economic potential of generative AI: The next productivity frontier. https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai
7. Germani, F., & Spitale, M. (2025). Framing bias in large language models [Preprint]. arXiv. https://arxiv.org/abs/2505.13488
8. MIT Center for Constructive Communication. (2024, March 4). Fireside chat: Sam Altman and Deb Roy [Video]. YouTube.