By Jean-Philippe Beaudet 

Key Provisions  

As Congress and state legislatures advance Artificial Intelligence (AI) legislation, TDC and its members seek to clarify, expand, and stress the concept of “open” AI design by providing a more comprehensive definition drawn from the blockchain industry. Within our community, these systems are more commonly referred to as “Decentralized AI” or “Blockchain-Based AI.” For consistency, we use the term “Decentralized AI,” or “DeAI.”

Importantly, Decentralized AI refers to more than open-source models. It encompasses the entire AI technology stack – including compute, data, models, and applications – built on public blockchain infrastructure. By distributing ownership, access, contributions, and governance across these layers, Decentralized AI Systems aim to enhance transparency, accessibility, resiliency, and public trust. This multi-layered approach reimagines how AI can be developed, trained, deployed, and maintained.

Because Decentralized AI Systems are built on public blockchains, the concept of “openness” is inherently embedded through transparent ledgers, open access, and decentralized governance. However, both openness and decentralization can vary across each layer of the system. While some components may be entirely public and open-source, others may incorporate permissioning or privacy-preserving tools depending on the use case. Nonetheless, Decentralized AI Systems overall offer unique and compelling benefits that cannot be achieved under traditional, centralized architectures. 

Below we outline the four core layers of the Decentralized AI technology stack, each of which plays an integral role in creating a resilient and participatory AI ecosystem:

Computation. DeAI aggregates training and inference across many independent operators (from idle gaming PCs to small data centers) via marketplaces that route and verify jobs. This broadens access, trims costs, eases grid hotspots by spreading load geographically, and hardens resilience. No single facility or vendor becomes a choke point. Contributors are paid automatically for provable work, unlocking latent capacity at scale. 

Example: A biotech startup splits a large protein-folding job into micro-tasks that idle gaming PCs in 40 states process overnight, paying each node a few cents in crypto for its GPU time. When a surge of demand hits Europe the next morning, spare servers in university labs automatically join the mesh, scaling the cluster without a single new data-center rack.
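To make the compute layer concrete, here is a minimal Python sketch of the split-verify-pay loop described above. The node names, the hash-based stand-in for GPU work, and the payment rate are illustrative assumptions, not any particular network's protocol:

```python
import hashlib

def split_job(payload: bytes, chunk_size: int) -> list[bytes]:
    """Split a large workload into micro-tasks that independent nodes can process."""
    return [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]

def process_task(chunk: bytes) -> bytes:
    """Stand-in for real GPU work (e.g. one protein-folding sub-problem)."""
    return hashlib.sha256(chunk).digest()

def verify(chunk: bytes, result: bytes) -> bool:
    """The marketplace spot-checks results by recomputing them."""
    return process_task(chunk) == result

PAYMENT_PER_TASK = 0.03  # illustrative: a few cents of crypto per verified task

job = b"protein-folding workload" * 1000
ledger: dict[str, float] = {}

for i, chunk in enumerate(split_job(job, 512)):
    node = f"node-{i % 5}"        # round-robin over five idle GPUs
    result = process_task(chunk)  # work performed by the remote node
    if verify(chunk, result):     # only provable work is paid
        ledger[node] = ledger.get(node, 0.0) + PAYMENT_PER_TASK
```

The key property is that payment is conditioned on verification, so contributors are compensated automatically for provable work rather than trusted on reputation.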

Data. Decentralized storage scatters encrypted shards across multiple locations, boosting availability while removing single points of failure. Providers can opt in to contribute bandwidth or datasets and receive compensation. With zero-knowledge techniques and trusted execution environments, models learn from sensitive data without exposing it, enabling privacy-preserving AI in high-stakes domains like healthcare and finance.

Example: Cancer patients opt in to share anonymized MRI scans that are sliced, encrypted, and scattered across hundreds of storage nodes; researchers query the dataset with zero-knowledge proofs that reveal insights but never raw images. Each time the dataset powers a published paper, the smart contract behind it releases token rewards to the original donors and storage providers.
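The core privacy property of sharding, that no single storage node can read the data it holds, can be illustrated with a simple n-of-n XOR secret-sharing sketch in standard-library Python. This is an assumption-laden simplification: production networks typically layer per-shard encryption over erasure coding so data survives node loss, which this sketch does not attempt.

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def shard(data: bytes, n: int) -> list[bytes]:
    """Split data into n shards; any single shard is indistinguishable from random noise."""
    pads = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
    return pads + [reduce(xor_bytes, pads, data)]

def reassemble(shards: list[bytes]) -> bytes:
    """XOR-ing the complete set of shards recovers the original bytes."""
    return reduce(xor_bytes, shards)

scan = b"anonymized MRI voxel data"
shards = shard(scan, 5)            # scatter across five storage nodes
assert reassemble(shards) == scan  # only the full set reconstructs the data
```

Because every shard except the last is pure random noise, and the last is masked by all of the others, a single compromised node learns nothing about the underlying scan.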

Model. Distributed version control and on-chain governance make every fine-tune, weight change, and policy toggle auditable. Usage-based rewards flow to creators and curators, aligning incentives for openness and quality. Public lineage (what data, which versions, who approved) replaces black-box opacity with transparent accountability and fast rollbacks when issues arise. 

Example: An open-source language model lives on a permissioned blockchain where every fine-tuning commit, weight change, and contributor wallet is publicly logged; governance tokens let the community approve or roll back updates. Whenever an app calls the model’s API, a micro-payment is auto-split among all recorded contributors, turning transparency into ongoing revenue. 
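The revenue-split logic in the example above would live in a smart-contract language such as Solidity in practice; the following Python sketch shows only the pro-rata arithmetic, with hypothetical wallet addresses and contribution weights:

```python
def split_payment(fee: int, weights: dict[str, int]) -> dict[str, int]:
    """Split one API-call fee (in smallest token units) pro rata to contribution weight."""
    total = sum(weights.values())
    shares = {addr: fee * w // total for addr, w in weights.items()}
    # Integer division can leave a remainder; credit it to the top
    # contributor so no token units are lost in rounding.
    top = max(weights, key=weights.get)
    shares[top] += fee - sum(shares.values())
    return shares

# Hypothetical contributor wallets and weights, as recorded in the model's on-chain history.
contributors = {"0xalice": 60, "0xbob": 30, "0xcarol": 10}
payout = split_payment(1_000, contributors)  # fee for a single API call
```

Working in integer token units rather than floats mirrors how on-chain accounting avoids rounding drift across millions of micro-payments.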

Application. Builders compose the above layers to ship agents and apps that spin up inference wherever capacity is cheapest or closest to users. Startups and SMEs get enterprise-grade capability at low marginal cost, while royalties and revenue sharing flow automatically to upstream contributors to fuel a sustainable, participatory AI economy. 

Example: A lightweight AI agent built on the above-described shared models handles real-time customer support for thousands of small e-commerce sites, spinning up inference jobs on whichever community GPUs are cheapest at that moment. Because the cost is pennies per chat, even a two-person shop can deploy enterprise-grade AI while the agent’s creators earn royalties every time it solves a ticket.

Bonus: Decentralized Operating System. A combination of the four layers above forms a decentralized AI operating system (deAIOS): a secure, verifiable, and composable foundation for AI development that provides not only access to models, but also tamper-resistant, provable AI development environments. It should be noted that while fully Decentralized AI Systems incorporate decentralization at every layer of the stack, there are many ways of combining decentralized and centralized layers that still capture the benefits of decentralization.

Conclusion 

With this explanation in place, it becomes clear that Decentralized AI Systems represent a fundamentally different approach to building, operating, and governing artificial intelligence. By distributing control across infrastructure, data, models, and applications, these systems offer unique advantages in terms of transparency, security, scalability, and resilience. 

This piece is part of an ongoing series and is substantially pulled from TDC’s Bipartisan House AI Taskforce Report on Artificial Intelligence—Open & Closed Systems, published in June 2025.