Artificial intelligence (AI) is one of the most widely discussed technologies today, and for good reason. It represents a technological leap on a scale we haven't seen before, with users now sending over 1 billion messages to ChatGPT daily.
But most conversations about AI swing between breathless hype and fear-based paranoia. Few explore the broader implications of AI while offering solutions for a healthy path forward.
Yet in a recent episode of the Get Discovered podcast, host Joe Walsh sits down with Fabio de Almeida, PhD candidate and Head of Design at Dom Rock. The two quickly move past the AI hype to discuss its sociocultural impacts, its darker underbelly, and how we—the general public—can better navigate this complex technology.
Watch the full episode below, or read on for a detailed summary.
The Darker Environmental Harms of AI Systems
Pulling insights from academia and AI thought leaders, Joe and Fabio explore the sociocultural ramifications of AI on our world today. A central one is the serious environmental cost of operating AI systems, particularly large language models (LLMs).
Fabio describes how data centers in countries like Chile are consuming natural resources, both exacerbating climate change and harming local communities on the ground. According to Food and Water Watch, AI server and data center energy demand may triple in the next five years, and by 2028, AI in the US could require as much as 720 billion gallons of water annually just to cool servers.
And EcoBusiness reports that a single AI data center consumes as much electricity as 100,000 households, and that by 2030, data centers' overall annual energy consumption could be slightly higher than Japan's current annual total, according to the latest report by the International Energy Agency (IEA).
Ultimately, this environmental impact contradicts the promises AI evangelists often make about solving global crises like climate change. Instead, AI is creating serious environmental harms for local communities and the world at large.
The Cultural Imbalance of AI’s Training Data
Another social topic covered in this conversation is AI’s limitations, particularly when it comes to its dependence on data. Fabio emphasizes that many AI systems are trained on data sets that predominantly reflect Western, Global North perspectives, leading to biases and a lack of representation for the Global South. This issue isn’t isolated to cultural data; facial recognition algorithms also show troubling biases, performing poorly on non-white faces due to imbalanced training data.
Without awareness of these limitations, AI systems can further entrench existing sociocultural biases.
How Do AI Operations Affect Labor?
Fabio also references the book Atlas of AI by Kate Crawford, who uncovers the vast, low-wage human labor that underpins AI. This includes the workers who mine the raw materials, the “ghost workers” who perform the tedious and often psychologically taxing tasks of labeling and cleaning data, and the factory workers who assemble hardware. Crawford highlights how these workers, many in the Global South, are made invisible by the AI industry.
Will AI Impact True Creativity?
Artificial intelligence is also transforming the creative landscape, but not without costs to artists. As AI systems trained on countless artworks (often without permission or compensation) flood markets with algorithmically generated content, many artists will suffer. Fabio notes that although some argue AI might ultimately enhance human creativity by handling routine tasks or inspiring new artistic movements, the immediate reality for many working artists is the opposite: their intellectual property fuels the very systems threatening their livelihoods. Meanwhile, instead of reducing workload, AI can reorganize labor in ways that don't create more free time but instead compel people to produce more in less time. Fabio reminds the audience to question our desire for efficiency at all costs, and to ask whether we're actually using our newfound ‘free time’ for creativity.
How Flashy Marketing Affected the AI Industry
Even the term “artificial intelligence” was a strategic choice, Joe notes, meant to attract attention and funding. As AI develops, marketing strategies continue to shape our perceptions—often spotlighting AI's seemingly magical or artistic capabilities, such as generating movies or artworks, which draws excitement but also creates unrealistic expectations. It's important to look past the ‘shiny object’ veil and consider who is behind the companies building these models, right down to the terminology itself.
Responsible AI Development: How We Can Push for an Ethical Path Forward
Every model, dataset, and deployment choice leaves a footprint. The goal is to make that footprint visible, measurable, and manageable. Based on Fabio’s insights, here are six core pillars of responsible AI development.
1. Data Transparency and Governance
Data is the foundation of any AI system — and often its biggest source of ethical and performance risk.
Responsible teams treat data provenance with the same rigor as code management. This could include:
- Document dataset origins. Track where data comes from, who created it, and under what license or consent terms.
- Validate representativeness. Include diverse data sources to reflect the range of users and contexts your product serves.
- Enforce governance policies. Maintain data lineage logs, flag restricted content categories, and respect creator opt-outs.
- Enable auditability. Store dataset metadata alongside versioned model releases for full traceability.
Transparent data pipelines reduce downstream bias and make it possible to explain AI decisions to users, partners, or regulators.
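To make that concrete, here's a minimal sketch of what dataset provenance tracking could look like in practice. The field names, dataset, and file layout are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DatasetRecord:
    """Provenance metadata stored alongside a versioned model release."""
    name: str
    version: str
    source_url: str                  # where the data came from
    license: str                     # license or consent terms it was collected under
    creator: str                     # who produced or curated it
    restricted_flags: list = field(default_factory=list)  # flagged content categories
    opt_outs_respected: bool = True  # were creator opt-outs honored?

# Hypothetical example: write the record next to the model artifacts so
# auditors can trace every input that went into a given release.
record = DatasetRecord(
    name="support-tickets-2024",
    version="1.3.0",
    source_url="https://example.com/datasets/support-tickets",
    license="CC-BY-4.0",
    creator="internal-data-team",
    restricted_flags=["pii-redacted"],
)

with open("model-v2.1.0.datasets.json", "w") as f:
    json.dump(asdict(record), f, indent=2)
```

Storing the metadata file alongside the versioned model release means an audit can always answer “what trained this model, and under what terms?”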
2. Sustainable Model Design
Large models can deliver impressive results, but at significant environmental and computational cost.
Responsible AI favors efficiency over excess, right-sizing models for specific use cases. This can include:
- Select domain-specific models instead of general-purpose LLMs when scope is narrow.
- Use on-device inference or hybrid edge/cloud setups to minimize energy use and latency.
- Cache deterministic steps to reduce redundant API calls or retraining (see the sketch below).
- Monitor inference workloads and power draw to identify optimization opportunities.
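On the caching point above, here's a minimal sketch of memoizing a deterministic model call so identical requests don't trigger redundant API traffic. The `call_model` function is a hypothetical stand-in for a real provider call:

```python
import functools

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a deterministic provider call (e.g., temperature=0)."""
    return f"summary of: {prompt[:40]}"

@functools.lru_cache(maxsize=1024)
def cached_summarize(prompt: str) -> str:
    # Identical prompts are served from the in-process cache instead of
    # triggering another network call, saving both latency and energy.
    return call_model(prompt)

cached_summarize("Quarterly report text...")  # first call: runs the model
cached_summarize("Quarterly report text...")  # repeat call: served from cache
```

For multi-process deployments, the same idea extends to a shared cache keyed on a hash of the prompt.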
3. Bias and Representation Audits
AI systems don’t just process information: they shape perception. Bias audits ensure those systems don’t amplify inequity.
- Create representative test sets that reflect your audience’s diversity in geography, culture, and language.
- Measure model behavior across demographic segments to identify disparate outcomes.
- Benchmark outputs over time to detect drift or regression.
- Define measurable acceptance criteria for fairness, accuracy, and relevance.
Responsible AI development treats fairness as an engineering requirement, not an afterthought, Fabio suggests.
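As one sketch of what “measurable acceptance criteria” might look like, the snippet below scores a toy test set per segment and flags any group that falls under an illustrative threshold. The segment labels, data, and threshold are invented for demonstration:

```python
from collections import defaultdict

# Each test example carries a segment label (e.g., region or language)
# so outcomes can be compared across groups. All data here is invented.
test_set = [
    {"segment": "global_north", "expected": "approve", "predicted": "approve"},
    {"segment": "global_north", "expected": "deny",    "predicted": "deny"},
    {"segment": "global_south", "expected": "approve", "predicted": "deny"},
    {"segment": "global_south", "expected": "approve", "predicted": "approve"},
]

ACCEPTANCE_THRESHOLD = 0.90  # illustrative fairness criterion

totals, correct = defaultdict(int), defaultdict(int)
for row in test_set:
    totals[row["segment"]] += 1
    correct[row["segment"]] += row["expected"] == row["predicted"]

# Flag any segment whose accuracy falls below the acceptance criterion.
for segment, n in totals.items():
    accuracy = correct[segment] / n
    status = "OK" if accuracy >= ACCEPTANCE_THRESHOLD else "FAIL"
    print(f"{segment}: accuracy={accuracy:.2f} [{status}]")
```

Run as part of the release pipeline, a per-segment check like this turns fairness into a gate a build can pass or fail, rather than a value statement.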
4. Environmental Accounting
AI’s infrastructure has a tangible ecological cost. Sustainability should be integrated into every design and procurement decision. Some ideas include:
- Requesting energy and water metrics from model and cloud providers.
- Tracking emissions per training run and per inference task.
- Deploying workloads in renewable-powered regions or carbon-neutral data centers.
- Implementing lifecycle accounting to capture total environmental impact, from training through end-of-life.
Embedding environmental metrics in performance reviews ensures sustainability is measurable, not aspirational.
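For example, emissions per training run can be estimated from energy draw and the grid's carbon intensity. This is a back-of-envelope sketch; the figures (power draw, PUE, carbon intensity) are illustrative placeholders, not measurements:

```python
def training_emissions_kg(gpu_hours: float, watts_per_gpu: float,
                          grid_kg_co2_per_kwh: float, pue: float = 1.2) -> float:
    """Rough CO2 estimate for a training run.

    gpu_hours            total GPU-hours of the run
    watts_per_gpu        average board power draw
    grid_kg_co2_per_kwh  carbon intensity of the hosting region
    pue                  data-center power usage effectiveness (facility overhead)
    """
    energy_kwh = gpu_hours * watts_per_gpu / 1000 * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Illustrative numbers only: 512 GPUs for 24 hours at 400 W,
# in a region with 0.4 kg CO2 per kWh.
print(f"{training_emissions_kg(512 * 24, 400, 0.4):,.0f} kg CO2")
```

Even a rough estimate like this, logged per run, makes it possible to compare regions and model sizes when deciding where and what to train.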
5. Ethical Labor Practices
AI systems depend on extensive human effort: from data labeling to content moderation and model evaluation. Responsible AI means valuing that work transparently and fairly. Ideas include:
- Partnering only with vendors who publish labor standards and pay fair, living wages.
- Auditing labor conditions within labeling and moderation supply chains.
- Crediting human contributors where their input meaningfully shaped outcomes.
- Maintaining human-in-the-loop review for any model that affects users’ rights, opportunities, or well-being (see the sketch below).
Acknowledging the human layer behind AI ensures equity and accountability stay central to development.
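On the human-in-the-loop point, here's a minimal sketch of a routing rule: any decision type that touches users' rights is queued for a human reviewer rather than automated. The task names and queue are hypothetical:

```python
# Hypothetical routing rule: decisions that affect users' rights are never
# fully automated; they are queued for a human reviewer instead.
HIGH_STAKES = {"loan_decision", "content_takedown", "account_suspension"}
review_queue: list = []

def route_decision(task_type: str, model_output: dict) -> dict:
    if task_type in HIGH_STAKES:
        review_queue.append((task_type, model_output))
        return {"status": "pending_human_review"}
    return model_output  # low-stakes output can ship automatically

route_decision("content_takedown", {"action": "remove", "confidence": 0.97})
```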
6. Purpose-Driven AI Deployment
Responsible AI prioritizes purpose and clarity over novelty. Questions to consider when evaluating purpose-driven AI deployment:
- Does this AI system measurably improve performance, accessibility, or discovery?
- Can the same outcome be achieved with simpler or non-AI solutions?
- Are users clearly informed when they’re interacting with AI?
- Do fallback mechanisms exist when confidence or reliability is low?
Applying AI only where it creates measurable, user-facing value prevents technical debt, over-engineering, and wasted energy.
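The last two questions above can be encoded directly in product logic. Here's a minimal sketch of a confidence-gated fallback, with `ai_search` and `keyword_search` as hypothetical stand-ins for an AI-powered path and a simpler non-AI path:

```python
def ai_search(query: str) -> dict:
    """Hypothetical stand-in for an AI-powered retrieval step."""
    return {"answer": f"AI answer for {query!r}", "confidence": 0.55}

def keyword_search(query: str) -> str:
    """Simpler, non-AI fallback path."""
    return f"Keyword results for {query!r}"

CONFIDENCE_FLOOR = 0.8  # illustrative acceptance threshold

def answer_query(query: str) -> dict:
    result = ai_search(query)
    if result["confidence"] >= CONFIDENCE_FLOOR:
        # Label the response so users know it came from AI.
        return {"source": "ai", "answer": result["answer"]}
    # Low confidence: fall back to the simpler non-AI path.
    return {"source": "keyword", "answer": keyword_search(query)}

print(answer_query("podcast hosting"))
```

Tagging every response with its source also makes it easy to disclose AI involvement to users, covering the transparency question in the same mechanism.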
An AI-Literate, Well-Informed Public as the Best Path Forward
Ultimately, AI literacy is not just about understanding the algorithms, prompt engineering, or AI’s origins. It’s about comprehending the broader social, economic, and environmental impacts, as well as the potentially harmful ramifications of how our data may be used in the future. This episode with Fabio is a refreshing, stark reminder to critically evaluate AI systems, question biases inherent in AI’s training data, and be mindful of what personal data we choose to feed AI systems. As AI continues to evolve, it’s crucial to foster a well-informed public that can navigate AI’s duality, balancing its opportunities with its limitations. This podcast episode serves as a pivotal resource, shedding light on AI’s real impacts and encouraging a broader societal conversation on its future.
Tune in to the full episode to hear the whole conversation. Listen on Spotify, Apple Podcasts, YouTube, or wherever you like to listen.
Episode References
- Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI by Karen Hao
- Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford
- The Cost of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism by Nick Couldry and Ulises A. Mejias
- The Eye of the Master: A Social History of Artificial Intelligence by Matteo Pasquinelli
About the Get Discovered Podcast
Get Discovered is a podcast from Prerender.io on AI, SEO, and online discoverability. We speak with business leaders, SEOs, and AI experts on how AI is impacting our world—and what you can do to keep up.