
Open Source Isn't Open Access: The Hidden Infrastructure Tax Killing LLM Adoption in the Global South

Authors:

Felix Kim & Redrob Research Labs


Executive Summary


Over the past two years, open-source large language models have been widely celebrated as the key to democratizing artificial intelligence. When organizations such as Meta and Mistral released model weights publicly, the prevailing narrative was that advanced AI had finally become accessible to anyone with sufficient technical expertise.

In theory, this is true. In practice, it is deeply misleading.

Downloading a model is not the same as deploying it. Running modern large language models in production requires substantial compute infrastructure, engineering expertise, and ongoing operational investment. These requirements create what we term the “infrastructure tax” of open-source AI—a hidden cost structure that disproportionately affects organizations in emerging markets.

Our research across 30 countries in South Asia, Africa, and Latin America reveals that the primary barrier to AI adoption is not model availability but total cost of ownership. While model weights may be free, the compute infrastructure required to operate them can exceed $15,000 per month for large models, far beyond the budgets of most startups, universities, and public institutions.

The result is a paradox: AI models are technically open, yet meaningful access remains concentrated in a small number of well-funded organizations.

True democratization requires not only open models but affordable, managed inference infrastructure capable of delivering those models at scale.


The Open-Source Promise


The open-source movement has historically played a critical role in expanding access to technology.

From Linux to Kubernetes to PyTorch, open ecosystems have enabled developers around the world to participate in building and deploying advanced software systems.

Large language models initially appeared poised to follow the same trajectory.

The release of open-weight models suggested that anyone could download a frontier-level AI system and deploy it locally.

However, language models differ fundamentally from traditional open-source software.

Running a modern LLM requires far more than a laptop and a code repository.


The Hidden Infrastructure Tax


Operating large language models in production environments introduces significant infrastructure requirements.

These include:

High-performance GPU clusters
Model optimization and quantization pipelines
Monitoring and reliability infrastructure
Data pipelines and storage systems
Specialized engineering expertise

For example, running a 70B parameter model continuously may require several high-end GPUs, resulting in compute costs exceeding $15,000 per month before accounting for engineering and operational expenses.
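The figure above can be sanity-checked with a back-of-envelope calculation. The sketch below is illustrative only: the hourly GPU rate and GPU count are assumptions, not vendor quotes, and real deployments also pay for storage, networking, and engineering time.

```python
# Back-of-envelope monthly compute cost for continuously serving a
# 70B-parameter model. All figures are illustrative assumptions.

GPU_HOURLY_RATE = 2.50      # assumed on-demand price per high-end GPU, USD/hour
GPUS_REQUIRED = 8           # assumed GPUs to host 70B weights in half precision
HOURS_PER_MONTH = 24 * 30   # always-on serving

def monthly_compute_cost(hourly_rate: float, gpus: int, hours: int) -> float:
    """Raw compute cost before engineering and operational overhead."""
    return hourly_rate * gpus * hours

cost = monthly_compute_cost(GPU_HOURLY_RATE, GPUS_REQUIRED, HOURS_PER_MONTH)
print(f"Estimated monthly compute: ${cost:,.0f}")  # → $14,400
```

Even under these conservative assumptions, raw compute alone approaches the $15,000 per month cited above before any operational expenses are added.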

For organizations in high-income economies, these costs may be manageable.

For startups and universities in emerging markets, they are often prohibitive.


Evidence from Emerging Markets


To better understand the scale of this challenge, we conducted a survey of AI developers and organizations across 30 countries.

Participants included startups, universities, government agencies, and nonprofit organizations attempting to deploy AI systems locally.

The results were striking.

When asked to identify the primary barrier to adopting open-source LLMs, respondents cited:

Infrastructure cost: 48%

Engineering expertise: 27%

Cloud availability: 15%

Model licensing concerns: 6%

Training data access: 4%

In other words, the dominant obstacle was infrastructure, not the models themselves.

Even when organizations successfully downloaded open-source models, they often lacked the resources required to deploy them in real-world environments.


The New Digital Divide


This dynamic is creating a new form of technological inequality.

In the early days of the internet, access to information was constrained by connectivity.

Today, connectivity is widespread, but access to AI capability is constrained by infrastructure.

The result is a new divide:

Organizations with large compute budgets can operate advanced AI systems.

Organizations without those resources remain dependent on external APIs or unable to deploy AI at all.

Ironically, this divide persists even in an era where the underlying models are technically open.


Managed Inference as the Missing Layer


The core insight of our research is that the democratization layer for AI does not lie in the model itself.

It lies in the infrastructure that allows those models to be used efficiently.

Managed inference platforms can dramatically reduce the cost of operating AI systems by:

Pooling compute resources across many users
Optimizing model routing and compression
Reducing engineering overhead through managed services

By distributing infrastructure costs across large user bases, these systems make it possible to deliver advanced AI capabilities at price points accessible to organizations with limited resources.
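The economics of pooling can be sketched with simple arithmetic. The function and figures below are hypothetical, meant only to show how a fixed cluster cost amortizes across tenants; real platforms price on usage and achieve varying utilization.

```python
# Illustrative sketch: fixed infrastructure cost amortized across tenants
# sharing one cluster. Numbers are assumptions, not measured prices.

def per_tenant_cost(fixed_monthly_cost: float,
                    tenants: int,
                    utilization: float = 1.0) -> float:
    """Effective monthly cost per organization on a shared cluster.

    utilization is the fraction of cluster capacity kept busy; idle
    capacity still costs money, so lower utilization raises the
    effective per-tenant price.
    """
    if tenants <= 0 or not (0 < utilization <= 1):
        raise ValueError("need at least one tenant and utilization in (0, 1]")
    return fixed_monthly_cost / (tenants * utilization)

# Dedicated deployment: one organization bears the full cost.
print(per_tenant_cost(15_000, tenants=1))                     # → 15000.0
# Managed platform: 50 organizations share the cluster at 80% utilization.
print(per_tenant_cost(15_000, tenants=50, utilization=0.8))   # → 375.0
```

Under these assumed numbers, pooling turns a prohibitive $15,000 monthly bill into a few hundred dollars per organization, which is the core mechanism behind the claim above.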


Implications for the Global AI Ecosystem


If the goal of open-source AI is genuine democratization, then releasing model weights alone is insufficient.

Without affordable infrastructure, open-source models risk becoming open in theory but inaccessible in practice.

This suggests that the next phase of AI development must focus not only on improving models but also on building infrastructure capable of delivering those models to a far broader range of users.


Conclusion


Open-source models represent a remarkable technological achievement. However, they are only one component of a much larger system required to deliver AI capabilities at scale.

Until the infrastructure tax is addressed, the promise of open AI will remain unrealized for much of the world.

True democratization will occur when advanced AI systems become not only open, but operable by anyone who needs them.

Copyright © Redrob 2026. All Rights Reserved.
