AI & DePIN Are Building the Future of Mobility: A Conversation with DIMO’s CTO, Yevgeny Khessin
“DIMO’s goal when we started in 2020 was to fix all the problems that had occurred in the 10 years prior in the mobility space.”
The Decentralized Storage Alliance's Ken Fromm recently had the opportunity to sit down with Yevgeny Khessin, CTO and co-founder of DIMO.
They dove deep into decentralized physical infrastructure (DePIN), decentralized storage, and how DIMO is powering the future of the automotive industry.
Welcome Yevgeny! Thank you for taking the time today. Please introduce yourself to our community.
Of course. I go by Yev for short. I was born in Ukraine and came [to the US] about 25 years ago. Ever since I was a kid, I have loved cars, so, unsurprisingly, after college I joined the automotive industry. I have been working in the connected vehicle space for years – building integrations with EVs, mobile applications, and other automotive services – all with the goal of making my favorite asset, the car, as smart as your phone. We started building DIMO over five years ago with the goal of bringing connected vehicles to the masses and allowing people to build applications and services within an open ecosystem.
Starting a company is not an easy thing. What got you to the point where you wanted to take that jump?
This is actually not my first startup. I started a mobility consultancy firm in 2016 which we grew from two people to 30 – mostly working with Ford. Funny enough, there’s a correlation between this first startup and this second one. Back in 2016, there was a lot of buzz around mobility and everybody had this vision of making transportation more efficient by making the car smarter, but unfortunately we didn't quite achieve it.
DIMO’s goal when we started in 2020 was to fix all the problems that had occurred in the 10 years prior in the mobility space. What the industry lacks is an ecosystem – an approach that is net beneficial to all – as opposed to a competition over who can get more vendor-locked customers onto their API. We've been building DIMO for about four-plus years now based on this premise.
Can you explain the DIMO sites – DIMO.co and DIMO.org?
DIMO.co is our consumer offering and it’s where our mobile app resides. We use that to build the platform, validate it, and provide value to users who don't have a connected vehicle today.
There's also DIMO.org, which is an open source project/protocol. It primarily resides on GitHub today and uses the website as the front-end/landing site for the protocol itself.
Can you explain what DIMO is?
DIMO is really an ecosystem. At the core level, it's a protocol that creates and supports an ecosystem. We've built all the plumbing needed to ingest and store data and – most importantly – with users' consent, share it with a third party. The protocol itself can handle any kind of IoT asset – it could be a doorbell, it could be a scooter – but we've been heavily focused on cars because they're the ones that can have the biggest net benefit to society.
Can you explain what data you're collecting, how you're collecting it, and ultimately the problems you're solving?
The DIMO protocol is open source and one of its features is a module called the Oracles, which allows anybody to build a data integration from their data source to a DIMO storage node. The integrations we've built in the open source include the Tesla API integration, which you can find on GitHub. It lets any Tesla driver connect their vehicle to DIMO and start storing their data.
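[For illustration, here is a rough sketch of what such an Oracle integration could look like: a loop that polls a vehicle data API and forwards readings to a storage node. The endpoints, field names, and payload shape are hypothetical, not DIMO's actual interfaces.]

```typescript
// Hypothetical sketch of an Oracle-style integration.
// Endpoint paths and field names are illustrative, not the real DIMO APIs.

interface VehicleReading {
  vehicleId: string;                          // identity of the vehicle on the network
  timestamp: string;                          // ISO-8601 capture time
  signals: Record<string, number | boolean>;  // e.g. speed, odometer, tire pressure
}

async function pollAndForward(vehicleId: string, apiToken: string): Promise<void> {
  // 1. Pull the latest telemetry from the source API (e.g. an automaker cloud).
  const res = await fetch(`https://vehicle-api.example.com/v1/${vehicleId}/telemetry`, {
    headers: { Authorization: `Bearer ${apiToken}` },
  });
  const signals = await res.json();

  // 2. Wrap it in a reading and push it to the user's chosen storage node.
  const reading: VehicleReading = {
    vehicleId,
    timestamp: new Date().toISOString(),
    signals,
  };
  await fetch("https://storage-node.example.com/ingest", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(reading),
  });
}

// Run the oracle on a fixed interval.
setInterval(() => pollAndForward("vehicle-123", process.env.API_TOKEN ?? ""), 60_000);
```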
Another one is an integration with an aftermarket device company. If you have an older car, you might not have any telematics readily available to you. We let you buy an aftermarket device and then ingest and store the data it produces.
As for the data we’re collecting, vehicles run a protocol called the CAN bus. You can think of it as being just like your home Wi-Fi network. A car has its own local network where there are somewhere between, say, 50 and 200 data points that are easily collectible from the car.
Most of it is things you would expect, such as speed, engine RPM, location, tire pressure, which gear you're in, and the odometer, all the way through to whether the trunk is open and whether there is enough diesel exhaust fluid in the car. All of these data points get stored in the DIMO node. With the protocol and the ecosystem, we're enabling people to build applications on top of this individual and aggregate data.
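[For a concrete sense of what those data points look like on the wire, the minimal sketch below decodes two standard OBD-II responses carried over the CAN bus – engine RPM and vehicle speed. Real vehicles expose many more signals, often behind manufacturer-specific frame IDs.]

```typescript
// Minimal sketch: decoding two standard OBD-II PIDs from CAN bus payloads.
// Real vehicles expose far more signals, many behind proprietary frame IDs.

function decodeEngineRpm(data: Uint8Array): number {
  // PID 0x0C: RPM = ((A * 256) + B) / 4, where A and B are the first data bytes.
  const [a, b] = data;
  return (a * 256 + b) / 4;
}

function decodeVehicleSpeed(data: Uint8Array): number {
  // PID 0x0D: speed in km/h is carried directly in a single byte.
  return data[0];
}

// Example frames (payload bytes only, headers stripped).
console.log(decodeEngineRpm(new Uint8Array([0x1a, 0xf8])));   // ≈ 1726 RPM
console.log(decodeVehicleSpeed(new Uint8Array([0x3c])));      // 60 km/h
```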
Maybe you’re a fleet company with 200 vehicles for deliveries, trucking, logistics, whatever it may be, and you care about vehicle health, where your drivers are, and whether your deliveries are on time. All of this data can be used to build a fleet application.
The best example we're working on today, and the one that I think explains it most clearly, is pay-per-mile insurance – like Metromile, for example. What you're really agreeing to is: I am going to give the insurance company access to my harsh braking, harsh acceleration, top speed, and approximate location, and on the other side I am going to pay the insurance company X amount of USDC per month or per mile driven.
What DIMO enables is this single-unit agreement – a bi-directional flow of data and payment that is signed by the wallets of both the user and the insurance company. It's a beautiful single source of truth that either side can cancel at any moment in time. All of the other pieces of the web3 finance infrastructure can then be used to onramp money in or offramp it out.
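[A minimal sketch of what such a single-unit agreement might look like as a data structure follows. The field names and signing flow are assumptions for illustration, not the DIMO protocol's actual schema.]

```typescript
// Illustrative sketch of a bi-directional data/payment agreement.
// Field names and flow are assumptions, not DIMO's actual on-chain schema.
import { Wallet, verifyMessage } from "ethers";

interface SharingAgreement {
  driver: string;            // driver wallet address
  insurer: string;           // insurer wallet address
  sharedSignals: string[];   // e.g. ["harshBraking", "topSpeed", "approxLocation"]
  pricePerMileUsdc: number;  // payment flowing back to the insurer
  validUntil: string;        // either side can cancel before this date
}

async function signAgreement(agreement: SharingAgreement, driver: Wallet, insurer: Wallet) {
  const payload = JSON.stringify(agreement);
  // Both parties sign the same payload, giving a single source of truth.
  return {
    agreement,
    driverSignature: await driver.signMessage(payload),
    insurerSignature: await insurer.signMessage(payload),
  };
}

function isCountersigned(signed: Awaited<ReturnType<typeof signAgreement>>): boolean {
  const payload = JSON.stringify(signed.agreement);
  // Recover each signer and compare against the addresses named in the agreement.
  return (
    verifyMessage(payload, signed.driverSignature) === signed.agreement.driver &&
    verifyMessage(payload, signed.insurerSignature) === signed.agreement.insurer
  );
}
```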
There's also a gamification side of things. We have a rewards engine that you can use to provide incentives to users. For example, maybe you want to build an application that verifies that you went to a Speedway station, another gas station, or a dealership, and provides a kind of airdrop reward for going to that location. We have a protocol module for this. These are just three of the many use cases that are live today.
What makes DIMO different from the others in the space? How do you set yourself apart?
Every other connected telematics system that exists today is a closed ecosystem. To be honest, I wouldn't even call them ecosystems. I would say they are SaaS products with preselected applications. DIMO is built on an open ledger, which means it's accessible anywhere globally. The smart contracts and the vehicle identity are accessible to anybody, and anyone can use them to build on top of. Because of this, we have SDKs that most connected systems do not have.
We also have privacy built in. Part of building a protocol focused on IoT data sharing is answering how you gather consent in a secure way. This is something most companies have done in a very basic sense – maybe there's one button you click to share data – but it is really enforced at the front door only. DIMO’s ecosystem has privacy at the core, based on private key signatures and a ledger system that allows everyone – whether the application developer or the user themselves – to know that a user's privacy will be preserved.
And so you've created an open data marketplace where the data is owned by the car owner (or maybe it’s a truck owner or a fleet owner) and where people can let that data be used for various purposes. Can you explain where you see this going?
Interestingly enough, when people talk about data marketplaces, they usually talk about one-way value flow – with data going one way [and value being captured by the aggregator]. With DIMO and the privacy model, you can build a data relationship and a cashflow relationship heading both ways. It becomes bi-directional.
Let's take something people don't like doing: servicing your vehicle. When you use DIMO to permission the diagnostics and create a relationship between the repair shop and your car, there are a couple of bi-directional data flows happening.
One is the mechanic wants to know the problem your car is having so they can help you fix it. They want to know what error codes are occurring, so that's the data heading to them. Now, obviously they would like to get paid when they resolve the problem. Because DIMO is on a ledger and we've set up this bi-directional relationship, you can actually use onchain payments to pay for this transaction. What's even more interesting is that the DIMO protocol allows the mechanic to issue an attestation that they serviced your car and that the problem was resolved. This enables the ecosystem to really thrive because the next time the user shows up – possibly at a different mechanic or dealership, or maybe when they want to sell their car – they're able to permission all of this data to those parties.
The system builds on itself because the next time I show up, I have more data than I had before. The dealership or the service center six years later can still access this data. And when I go to sell the car, I have a complete history of my vehicle that I can use to price it better than any automotive marketplace could.
Do you sell devices, and where's the data stored? How do you verify the data as being from the car? Let's go through some of the technical aspects and, again, what makes DIMO unique.
The uniqueness does not come from the devices themselves. Every device needed to collect vehicle data already exists today. By the way, our goal is to build an ecosystem, not to sell lots of devices. We provide a mobile application and a plug-in device that you can buy on Amazon. There is a data storage node hosted in AWS by a third party, and it can be accessed via them and also via the protocol’s telemetry.
Anyone can host one of these nodes, and as a user of the protocol you can actually change, in the smart contract, where your data is going to be stored. There is a second node being spun up in Europe as well. The system functions the same way; just the data storage will be in a different location.
The way we verify the data is via a mix of attestations. For example, every DIMO device signs data with a key in a secure element. You can think of it as if we've added a Trezor [wallet] to your car. All the data coming from the vehicle is signed by the wallet in the vehicle, which the protocol then uses to verify the data source. We call this particular attestation proof of vehicle.
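[As a rough illustration of that verification step – under the assumption that the device's public address is registered against the vehicle's identity on-chain – a consumer of the data could check each payload as sketched below. The registry lookup and field names are hypothetical.]

```typescript
// Sketch of "proof of vehicle"-style verification: check that a telemetry
// payload was signed by the key registered for that vehicle's identity.
// The registry stub and field names are hypothetical.
import { verifyMessage } from "ethers";

interface SignedTelemetry {
  vehicleId: string;
  payload: string;    // JSON-encoded sensor readings
  signature: string;  // produced by the secure element in the device
}

// Stand-in for a smart-contract read mapping vehicle identity -> device address.
const deviceRegistry = new Map<string, string>([
  ["vehicle-123", "0x0000000000000000000000000000000000000001"], // placeholder address
]);

function isAuthentic(t: SignedTelemetry): boolean {
  const expected = deviceRegistry.get(t.vehicleId);
  if (!expected) return false;
  // Recover the signer from the signature and compare to the registered key.
  return verifyMessage(t.payload, t.signature).toLowerCase() === expected.toLowerCase();
}
```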
Why is it important to have this form of trust and verification?
My answer here has actually changed over the years. When you have an open system, it opens up various gaming vectors, right? DIMO is an open system – anybody can call the smart contracts. And as we move into this world of AI, it means that AI can now generate infinite data.
You're already seeing how easy it is to generate a video of somebody saying something they have never said. The same thing is going to reach the IoT space, where it will be possible to generate IoT data at scale. You're going to need to verify somehow that this data was actually generated for real – especially for insurance purposes, but for many other purposes as well.
The second part is this open approach. Every single IoT system that exists today is closed source. With DIMO, the smart contracts and the ledger are accessible to all. You can have an ecosystem of assets that interoperate, that people can freely build on, and where people can build solutions to user problems in ways that could not be done before.
[With the old way,] you have to go to one automaker, one charging company, one smart home robot company at a time to integrate all these things. Now it can all be done in an ecosystem approach where things are built on each other and what one developer builds helps another. One data point can be used across five or six different applications.
How do web3 innovations like tokens and ledgers make this possible? Why can't you do this with just traditional cloud and web2 approaches?
I think it comes back to the problem space. A car today is built in Japan, shipped overseas to the US, leased to someone in Florida, driven and serviced and insured for five to six years, and then sold on the secondary market to Europe, where it once again gets serviced and used. So that's the reality of the world today, right? Enabling the lifecycle tracking I just described, from the creation of the vehicle onward, wouldn't function by the time this car made it to US shores if it were done on a server that's only available in Japan.
Once you're in the US, if there is not an open ledger that people can build on, you're left with the world we have today, where the only applications you can use – the only ecosystem available to you – is the one the carmaker allows you to use. And then when the vehicle goes to Europe, all this data, all these connections, all these applications are no longer accessible to you because you're in a different region with different rules and different laws. But the ledger and the DIMO network are global, the privacy guarantees are global, and the applications can be global because of it. That's the unlock of using a distributed ledger in terms of user value.
There's also the privacy and verification aspects. Being able to build a system where you can verify that a user shared their data with this application is something that's very important for regulators, for users, even for developers. Developers, for example, want to remove the liability of how the data was made available to them. They want to be able to say, “This is when the user gave me data and this is how I acquired it.”
This is hard to do with a centralized SaaS business because the only proof is coming from the inside, whereas with DIMO, all the transactions are user initiated. You can see the exact timestamp when they sign the transaction to share their data. You can see the timestamp when the vehicle was manufactured and when it was first signed into the network.
Talk about digital currency as a native primitive in the context of mobility and transportation.
The finance side is very interesting. People have been trying to figure out how to do automated payments for gas, EV charging, and parking for the better part of 10 years now. But as far as I know, there's nothing I can buy in the market today that actually enables this.
The automotive world has proposed a few different specifications that were, to be honest, way too complicated to go to production. You can't issue a Visa credit card to a non-human, for example, which is a funny limitation of the web2 finance industry.
But the web3 finance stack enables this. It can give a vehicle a wallet and it can give the charging meter a wallet. Instead of this impossible world that exists today where an EV driver has to essentially install six applications and add a credit card in six places just to be able to charge their car, we can give a wallet to every car and to every charger. This can just be done automatically on web3 rails with no new Visa credit cards, no new complexity, no new hardware.
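[A toy version of that flow, assuming both the car and the charger hold ordinary EVM wallets and settle in an ERC-20 stablecoin, might look like the following. Contract addresses, the RPC URL, and pricing are placeholders.]

```typescript
// Toy sketch: a car wallet paying a charger wallet automatically in USDC.
// Addresses, RPC URL, and pricing are placeholders for illustration.
import { Contract, JsonRpcProvider, Wallet, parseUnits } from "ethers";

const ERC20_ABI = ["function transfer(address to, uint256 amount) returns (bool)"];

async function settleChargingSession(kwhDelivered: number) {
  const provider = new JsonRpcProvider("https://rpc.example.com");
  const carWallet = new Wallet(process.env.CAR_WALLET_KEY ?? "", provider);

  const usdc = new Contract("0xUsdcTokenAddressPlaceholder", ERC20_ABI, carWallet);
  const chargerWallet = "0xChargerWalletAddressPlaceholder";

  // Price the session: e.g. $0.30 per kWh, settled in 6-decimal USDC.
  const amount = parseUnits((kwhDelivered * 0.3).toFixed(6), 6);

  // One transaction, no card networks, no per-charger app or account.
  const tx = await usdc.getFunction("transfer")(chargerWallet, amount);
  await tx.wait();
}
```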
What kind of reaction are you getting from people in the industry? As you explained the vision, are people getting it?
We're in conversations with insurance companies and automakers to integrate more natively with them. It's something that is going to take a little bit of time. If the moment comes when DIMO is natively integrated in a million cars per year coming off the assembly line, then we’ll effectively have transportation and mobility on a ledger.
At the moment we're running POCs and pilots with these companies, so they're getting it, but it's going to be a bit of time until you see the mass scale the finance industry is starting to see. It took, what, four years from about when DeFi summer first happened to Stripe announcing they're going to run their own L1 and having these beautiful onramps and offramps.
For us it's very similar. You're going to start seeing these automotive and mobility style companies integrating DIMO over the next three to four years, but it's going to take some time.
As the old saying goes, it's an overnight success that took five to 10 years.
I would say so. I mean, there will be successes before then, but the joke we always make internally is that DIMO will be replaced by teleportation. That's the moment that I would love to get to.
But if I could speak to one thing specifically: for automakers, using smart ledgers for mobility is not a new concept. GM looked extensively at this, and Toyota's blockchain lab has been publishing white papers on the topic as well. But it's going to take time, and it takes a village to build a new kind of ecosystem.
DIMO is really seeking to do something at industry scale: enable drivers to bring and share their data in a way they control, in order to build this future ecosystem of mobility applications focused on privacy, user control, and putting the driver first.
Having data that's owned by the users, that's stored securely, that can be verified as such, and that can be used with permission for a variety of uses – that sounds like a really good vision. How do people get a hold of a DIMO device, where do they go?
If you want to join DIMO as a user and you're interested in getting vehicle analytics insights on your car, you can download our DIMO mobile app today. If you have a Tesla, you can connect just via the integration in the mobile application. If you have a car from other brands, you can purchase our device on Amazon and connect your car that way.
If you're a developer, we have developer documentation and a developer console. It's an open source project, and so we have over a hundred repositories that comprise the DIMO ecosystem. Feel free to go there and start contributing, or even just star the repositories to follow along. I would say these are the main areas.
To do a POC or otherwise get in touch with us, you can join our Discord as well as email us at developers@dimo.org.
We really appreciate Yevgeny taking the time to speak with us at DSA, the Decentralized Storage Alliance.
Make sure to follow: @DIMO_Network
Visit their site: https://ai.dimo.co/
Developer site: https://www.dimo.org/
DIMO store: https://dimo.co/products/dimo-lte-r1
DSA Position Paper – Data as an Asset Class
How Decentralized Storage serves the new data as an asset class model
Introduction
In 2006, the phrase “Data is the New Oil” became part of the lexicon and with it a growing interest in the value of digital information and data. What soon followed were expansions on this analogy, such as “like oil, data is valuable, but if unrefined it cannot really be used.” In the decades since, enterprises and institutions have been struggling to learn how to monetize their data, all the while continuing to store vast amounts of it.
The advent of AI, autonomous vehicles and robots, and other data-driven innovations has renewed interest in this topic to the point that legislation is now being proposed and adopted around the world that enumerates how data can be leveraged as an asset class, along with greater specificity as to the rights of data owners.
In fairness, though, given the concerns surrounding these innovations, data might more aptly be described as ‘uranium yellowcake’ rather than oil, in that it requires proper safeguards and controls before it can be processed into ‘hugely valuable single sources of truth’.
The purpose of this paper is to make readers aware of the strategic opportunities that arise when data is properly treated as an asset, as well as to better align market requirements for decentralized data storage and access with network and protocol design and development.
This document briefly describes the characteristics decentralized data storage must have for data to qualify as a legal asset, and the properties required of a system to provide value to data owners and custodians. We will include a short description of the categories of data and the differences in handling that data. Finally, we will propose some examples of how existing technology can be used to create such an initial system, along with expected advancements that will meet the Web3 promise of “Read, Write, Own”.
Legal Requirements
Brazil and the UAE have both created regulatory frameworks for data as an asset class, which are expected to help shape global frameworks for how data is defined and controlled as well as the rights that may be bestowed upon its creators and/or owners. This enhanced focus on data regulatory controls looks to address the implications of AI on data usage, rights, and liabilities. And while the use cases typically concern bodies of information that might be used to inform a Grok or ChatGPT, the growing ability of AI models to synthesize and make use of a wide variety of data sources makes these frameworks noteworthy to all data users and owners.
Key concepts underlying a data asset:
Data type categories
Data ownership (data subject in privacy terminology)
Data provenance (chain of custody to original creator)
Data access control
Additional concepts this paper doesn’t cover include:
Data monetization
Data retrievability latencies
Data licensing
Data liability
Data trade barriers
Data Characteristics and Properties
Data Sensitivity Categories
Broadly, we can think of four categories that determine the way data is handled.
Public data - data that is free to use, i.e., in the public domain without a declared owner. One example would be an 1880s edition of the works of William Shakespeare; another would be government legislation that is expressly placed in the public domain upon its creation.
Private data - data that is owned and controlled by a person or entity and whose exposure would not be catastrophic. Examples would be streamed music or videos that are sold or rented online.
Secret data - data that is owned and deemed highly confidential and the unauthorized exposure of which would be very damaging. Examples of this include personally identifiable information and individual healthcare records.
Critical data - data that is so sensitive it would be stored offline and any exposure would be catastrophic. Examples would be pre-IPO plans and documents intended for SEC filings, or the formula for Coca-Cola.
Data Ownership
Ownership of data entitles the data owner to classify the data into the types enumerated above, or as they see fit. It grants the owner the right to control usage of and access to their data, and to apply the degree of controls they deem necessary to protect it. In the context of decentralized data storage, this necessitates the ability of the data to be positively identified with an owner. [Proof of Ownership]
Data Provenance
Data is not static; it can change hands, it can move around, it can be copied. For data to have legal status as an asset, some form of provenance or chain of custody must be in place. In the old Web2 world, the concept of ‘possession is nine-tenths of the law’ simply meant that if you stored data in your data center or with a cloud provider, then you would be able to claim ownership. This does not work in a decentralized Web3 environment where data may move among decentralized physical infrastructure providers. A proof of ownership linked to a proof of content is far more suitable.
Data Access Control
Apart from public data, data access needs to be controlled by the owner of the data, whether to limit access or to track it. This is a mature field in the Web2 world, where centralization controls access, but in the Web3 world, decentralization makes this much more challenging. Access control and management are absolute requirements for the concept of owner control and legal claims.
Example System Architecture
This section is not intended to be prescriptive or to propose a specific solution, but rather to give ideas for how to create a system that would meet the criteria of an asset class satisfying market and legal requirements. We ignore public data (since it falls outside the full definition of an asset class) and critical data, as both are outside the scope of this document.
One of the best Web3 examples of existing proof of ownership is the widespread use of NFTs. While people often think of an NFT as an immutable store of digital art, it is really a certificate of ownership of art – or of anything that can be referenced digitally.
A non-fungible token (NFT) is a unique digital identifier that is recorded on a blockchain and is used to certify ownership and authenticity. It cannot be copied, substituted, or subdivided.
The beauty of NFTs is that they can be sold or traded, or used to denote a series of objects, such as a collection of prints, e.g., number 25 of 100 prints. This is quite important for an asset class: to trace ownership or royalties for an object, along with its history (provenance) over its lifecycle.
To review, in order to lay the groundwork for digital data ownership, we first start with data that has value to the owner and can be controlled and positively linked to the owner. This allows the legal frameworks to be solidly built around the intellectual property rights of any creation by a person once it has been saved or sent over digital media (memory sticks, drives, texts, video calls, streaming, etc). In today's world, you typically lose your rights when you post your data online to a third party system.
In a Web3 context, exercising ownership requires a digital identity. Relying on a secret key or an online ID and password not only jeopardizes the security of data ownership but is also unwieldy and prone to loss and misuse. Decentralized IDentifiers (DIDs) are a current way to properly associate the identity of an owner with an asset (whether NFTs or the data assets themselves).
Next, how do we connect the NFT (or other certificate of ownership) to an actual data asset? Luckily in the Web3 world we are familiar with the concept of Content IDentifiers (CIDs) which mathematically prove that digital content is authentic and can be associated with an owner. In the legal world, the oldest (date and time) documented artifact determines ownership rights.
DID → NFT → CID = legal claim of ownership or custodianship.
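To make the chain concrete, a minimal sketch of an ownership record tying the three identifiers together might look like this; the identifier formats follow W3C DID and IPFS CID conventions, the NFT reference uses the common chain/contract/tokenId form, and all values are placeholders.

```typescript
// Minimal sketch of an ownership record linking DID -> NFT -> CID.
// All identifier values are placeholders.

interface DataOwnershipRecord {
  ownerDid: string;       // W3C Decentralized Identifier of the owner
  nft: {                  // certificate of ownership recorded on a blockchain
    chainId: number;
    contract: string;
    tokenId: string;
  };
  contentCid: string;     // content identifier proving which bytes are owned
  registeredAt: string;   // earliest documented claim supports legal priority
}

const example: DataOwnershipRecord = {
  ownerDid: "did:example:123456789abcdef",
  nft: { chainId: 1, contract: "0xCertificateContractPlaceholder", tokenId: "42" },
  contentCid: "bafybeiexamplecidexamplecidexamplecidexample",
  registeredAt: "2025-01-01T00:00:00Z",
};

// A legal claim of ownership or custodianship is asserted by showing control of
// the DID, ownership of the NFT, and possession of data that hashes to the CID.
```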
Finally, to meet market requirements for monetization and confidentiality, we must provide methods of access control and proof of access. Most current methods require some form of centralization, but decentralized access control is still in its infancy. Tools such as smart contracts can be used to manage policies; however, enforcement of data access needs to be baked into the system and specified as a standard.
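A sketch of what such decentralized access control could look like at the application layer is shown below: a policy object (stubbed here in place of a smart contract), a check at read time, and an access receipt as proof of access. This is an illustrative design under those assumptions, not a description of any existing system.

```typescript
// Illustrative sketch of access control plus proof of access.
// The policy store stands in for a smart contract; names are hypothetical.

interface AccessPolicy {
  cid: string;                // which data asset the policy governs
  ownerDid: string;
  allowedDids: string[];      // parties granted read access
  expiresAt: string;
}

const policies = new Map<string, AccessPolicy>(); // stand-in for on-chain state

function mayAccess(cid: string, requesterDid: string, now = new Date()): boolean {
  const policy = policies.get(cid);
  if (!policy) return false;
  return policy.allowedDids.includes(requesterDid) && now < new Date(policy.expiresAt);
}

interface AccessReceipt {
  cid: string;
  requesterDid: string;
  grantedAt: string;          // timestamped evidence that access was permitted
}

function recordAccess(cid: string, requesterDid: string): AccessReceipt | null {
  if (!mayAccess(cid, requesterDid)) return null;
  // In a real system this receipt would be signed and anchored on-chain.
  return { cid, requesterDid, grantedAt: new Date().toISOString() };
}
```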
Summary
As of this writing, decentralized storage has focused on the technical needs of storing data without regard for the legal rights of data owners. Building a system that is both easy to use and provides legal protections is critical for widespread market adoption. The challenges are not insignificant, but the four key concepts outlined above need to be taken into account for any Web3-based system to be desirable.
The view of Data as an Asset is a theme that will define the rest of the decade and the years to come, not only as the breadth of what data means becomes fully understood, but also because of its derived uses, of which we have seen only a small fraction so far.
X Space Recap - IoT, Telemetry, Sensor & Machine Data w/ Filecoin Foundation, IoTeX, WeatherXM, and DIMO
"NOAA allocates roughly $1.2 billion each year to gather weather data. WeatherXM can deliver comparable coverage for about 1% of that cost."
With this comparison, WeatherXM Co-Founder Manos Nikiforakis set the tone for our X Space, “IoT, Telemetry, Sensor & Machine Data”, hosted alongside Filecoin Foundation.
For July’s Space, we brought together:
Manos Nikiforakis (Co-Founder, WeatherXM)
Yevgeny Khessin (CTO and Co-Founder, DIMO)
Aaron Bassi (Head of Product, IoTeX)
Over 60 minutes, the group examined how decentralized physical infrastructure (DePIN) is reshaping data markets. They walked through proof chains that begin at the sensor, incentive models that discourage low-quality deployments, and programs that reward EV owners for sharing telemetry.
DePIN is really on the rise. Networks are operating, real users are earning rewards, and real enterprises are consuming the data.
The following excerpts highlight the key moments from our recent X Space, diving deep into the insights shared.
Turning vehicle telemetry into real savings
Drivers who join DIMO first mint a "vehicle identity", a smart-contract wallet that stores their car's data and keeps them in charge of access.
From there they can release selected metrics to approved parties. A leading example is usage-based insurance: share only charging-session records and receive an annual rebate.
As DIMO's CTO Yevgeny Khessin explained, the program with insurer Royal pays "$200 back per year" for that data.
Each transfer is cryptographically signed, so ownership stays with the driver while the insurer receives verifiable telemetry.
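As an illustration of that selective sharing (not DIMO's actual implementation), the snippet below filters a vehicle's event stream down to charging-session records only and signs the disclosed subset so the insurer can verify its origin; field names are assumptions.

```typescript
// Illustrative selective-disclosure sketch: share only charging-session
// records, signed by the vehicle identity. Field names are assumptions.
import { Wallet } from "ethers";

interface VehicleEvent {
  type: "charging_session" | "trip" | "diagnostic";
  startedAt: string;
  data: Record<string, number>;
}

async function discloseChargingOnly(events: VehicleEvent[], vehicleWallet: Wallet) {
  // Keep only the metrics the insurance program is entitled to see.
  const disclosed = events.filter((e) => e.type === "charging_session");
  const payload = JSON.stringify(disclosed);
  return {
    payload,
    // The signature lets the insurer verify origin while the rest of the data stays private.
    signature: await vehicleWallet.signMessage(payload),
  };
}
```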
This demonstrates how DePIN turns dormant signals into economic value—and why the underlying infrastructure matters.
The ledger as the "handshake" layer
The role of the ledger becomes clear when you consider interoperability requirements. It serves as the shared source of truth for composability between systems. New patterns emerge, such as vehicle-to-vehicle payments.
In a web2 world, that would have required coordination across Visa and multiple automakers. On a blockchain, the payment is straightforward because the parties meet on the same ledger.
The same "handshake" applies to data sharing. Yevgeny went on to highlight that using the ledger as the agreement layer, with private-key permissions, creates a system where only the owner can authorise access. This addresses recent controversies over car data being shared without clear consent.
Khessin summarised the benefits as ownership, interoperability, and verifiability. Looking ahead, machines will need to verify one another before acting on any message. A public ledger provides the audit trail for those decisions.
"Using a ledger as the source of truth for interoperability ... allows you to enable use cases that just weren't possible before."
This sets up the shift from isolated "data markets" to the protocol plumbing: identity, permissions, and payments negotiated on a neutral, verifiable layer.
Building the infrastructure: IoTeX's physical intelligence ecosystem
IoTeX positions itself as the connector between real-world data and AI systems through "realms". These are specialized subnets that allow DePIN projects to reward users with multiple tokens: their native token, realm-specific tokens, and IoTeX tokens.
Example: a mobility realm can reward a driver in both the project's token and an IoTeX realm token for verified trips, with proofs carried from device to contract.
The technical challenge is ensuring data integrity throughout the entire lifecycle: from collection at the device, through off-chain processing, to smart contract verification on-chain. Each step requires cryptographic proofs that the data hasn't been tampered with.
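A simplified sketch of that idea is shown below: keep a hash trail from the raw device reading through off-chain processing so a contract (or anyone else) can later check that nothing was tampered with. It is a generic pattern, not IoTeX's actual pipeline.

```typescript
// Generic integrity-trail sketch: hash the raw reading, process it off-chain,
// and keep both hashes so a verifier can confirm nothing was tampered with.
// This is a simplified pattern, not IoTeX's actual pipeline.
import { createHash } from "crypto";

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

interface ProcessedReading {
  rawHash: string;        // anchor for the original device payload
  processed: string;      // e.g. aggregated or cleaned data
  processedHash: string;  // anchor for the derived result
}

function processReading(rawPayload: string): ProcessedReading {
  const processed = rawPayload.trim().toLowerCase(); // stand-in for real processing
  return {
    rawHash: sha256(rawPayload),
    processed,
    processedHash: sha256(processed),
  };
}

// A verifier holding the original payload (or its on-chain hash) can re-check the trail.
function verifyTrail(rawPayload: string, record: ProcessedReading): boolean {
  return sha256(rawPayload) === record.rawHash && sha256(record.processed) === record.processedHash;
}
```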
IoTeX tracks this entire chain to ensure enterprise customers can trust the data they consume. This becomes critical as AI systems increasingly depend on real-world inputs for training and decision-making.
Incentives as quality control: finding shadows, fixing stations
WeatherXM demonstrates how economic incentives can drive data quality at scale. They tie rewards directly to measurable performance metrics.
They start at the device level. Stations sign each sensor payload with secure elements embedded during manufacturing. Location verification uses hardened GPS modules paired to the main controller. This makes spoofing significantly more difficult and lets the network trace readings back to specific units and coordinates.
Deployment matters too. Operators submit site photos for human review, with machine vision assistance planned for the future.
Then comes continuous quality assurance. WeatherXM analyses time-series data to detect poor placements. Repeating shadows signal nearby obstacles, not moving cloud cover. Similar pattern recognition applies to wind and other environmental signals.
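A toy version of that kind of check is sketched below: it looks for a dip in solar-radiation readings that recurs at the same hour across several days, which points to a fixed obstruction rather than passing clouds. WeatherXM's production quality assurance is, of course, far more sophisticated.

```typescript
// Toy shadow detector: flag hours where solar radiation dips well below its
// neighbours on most days, suggesting a fixed obstruction rather than clouds.
// WeatherXM's production quality checks are far more sophisticated.

type DailyReadings = number[]; // 24 hourly solar-radiation values for one day

function suspectedShadowHours(
  days: DailyReadings[],
  dipRatio = 0.5,        // reading must fall below half the expected value
  minDayFraction = 0.8,  // ...on at least this fraction of days
): number[] {
  const flagged: number[] = [];
  for (let hour = 6; hour < 18; hour++) {            // daylight hours only
    let dippedDays = 0;
    for (const day of days) {
      const expected = (day[hour - 1] + day[hour + 1]) / 2; // interpolate from neighbours
      if (expected > 0 && day[hour] < expected * dipRatio) dippedDays++;
    }
    // A dip that repeats at the same hour on most days looks like a shadow, not weather.
    if (dippedDays / days.length >= minDayFraction) flagged.push(hour);
  }
  return flagged;
}
```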
The incentive mechanism is direct: poor deployments earn reduced rewards. The system writes on-chain attestations of station quality over time, making performance transparent to participants and customers.
“We analyse time-series data to identify repeating shadows. That signals a placement issue. Poorly deployed stations receive fewer rewards, and we record quality on-chain.”
This approach addresses a persistent problem with legacy sensor networks. Many government and corporate deployments struggle with maintenance costs and quality control. Making quality transparent filters good stations from bad without requiring a central gatekeeper.
WeatherXM extends this concept to forecast accuracy, tracking performance across different providers and planning to move those metrics on-chain as well.
AI agents and the future of automated data markets
Manos described WeatherXM's implementation of an MCP client that allows AI agents to request weather data without navigating traditional API documentation.
More significantly, they're exploring x402 protocols that enable agents to make autonomous payments. Instead of requiring human intervention with traditional payment methods, an AI agent with a crypto wallet can pay for historical or real-time weather data on-demand.
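In spirit, the flow looks something like the sketch below: the agent requests data, receives an HTTP 402 response with payment details, pays from its wallet, and retries. The header and field names here are illustrative stand-ins, not the actual x402 specification.

```typescript
// Spirit-of-x402 sketch: an agent pays for data on demand from its own wallet.
// Header and field names are illustrative stand-ins, not the real x402 spec.

async function fetchWeatherWithPayment(
  url: string,
  payFromWallet: (invoice: unknown) => Promise<string>, // settles on-chain, returns proof
) {
  let res = await fetch(url);
  if (res.status === 402) {
    // The server describes what it wants to be paid; the agent settles autonomously.
    const invoice = await res.json();
    const paymentProof = await payFromWallet(invoice);
    // Retry with evidence of payment attached (header name is hypothetical).
    res = await fetch(url, { headers: { "X-Payment-Proof": paymentProof } });
  }
  return res.json();
}
```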
This capability unlocks new use cases around prediction markets, weather derivatives, and parametric insurance—all operating on-chain with minimal human intervention.
As synthetic content proliferates, the value of verifiable real-world data increases. AI systems need trusted inputs, and the cryptographic proofs that DePIN networks provide become essential infrastructure.
The compounding effect
These developments create a compounding effect. Shared protocols reduce coordination costs by eliminating integration friction. Transparent incentive alignment drives up data quality across networks. Participants capture direct value from resources they already own.
The models discussed show machine data evolving from siloed exhaust to a trusted, priced asset that enterprises consume for real business applications.
The Decentralized Storage Alliance opens its Group Calls for technical discussions on AI, DePIN, storage, and decentralization. If you're building in this space, we'd welcome your participation in our ongoing research and community initiatives.
Keep in Touch
Follow DSA on X: @_DSAlliance
Follow DSA on LinkedIn: Decentralized Storage Alliance
Become a DSA member: dsalliance.io
What the Tragedy at the Library of Alexandria Tells Us About The Importance of Decentralized Storage
Alexandria, 48 BCE. In the chaos of Caesar's siege, flames meant to block enemy ships licked inland, reaching the Library's waterfront storehouses. Papyrus burst like dry leaves; ink rose as black smoke. Thousands of scrolls covering topics ranging from geometry and medicine to poetry vanished before dawn. One blaze, one building, and centuries of knowledge were ash.
The lesson landed hard and fast: put the world’s memory in a single vault, and a single spark can erase it.
Why the Library was a Beacon for the World’s Minds
Alexandria's Library powered the Mediterranean’s information network. Scholars and traders from Persia, India, and Carthage streamed through its gates. Inside, scribes translated and recopied every scroll they touched. India spoke to Athens; Babylon debated Egypt, all under one roof.
Estimates place the collection anywhere between forty thousand and four hundred thousand scrolls. However, the raw tally matters less than the ambition: to gather everything humankind had written and make it conversant in one place. Alexandria became the Silicon Valley of the Hellenistic world. Euclid refined his Elements here; Eratosthenes measured Earth’s circumference within a handful of miles; physicians mapped nerves while dramatists perfected tragedy. Each scroll was a neural thread in a vast, centralized mind—alive only so long as that single body endured.
How Did Centralization Fail?
All knowledge sat in one building. Four separate forces struck it. Each force alone hurt the Library; together they emptied every shelf.
1. War and Fire
One building held the archive, and one battle lit the match. When Caesar's ships burned, sparks crossed the quay, and scrolls turned to soot in hours.
2. Power Shifts
Rulers changed; priorities flipped. Each new regime cut funds or seized rooms for soldiers. Knowledge depended on politics, not purpose.
3. Ideological Purges
Later bishops saw pagan danger in Greek science. Statues fell first, then shelves. Scrolls that clashed with doctrine vanished by decree.
4. Simple Neglect
No fire is needed when roofs leak. Papyrus molds, ink fades. Without steady upkeep, even genius crumbles to dust.
Digital Fires Happening Today
So what has been happening in modern times that reminds us of the Library of Alexandria? A few examples come to mind.
On 12 June 2025, a misconfigured network update at Google Cloud cascaded through the wider internet. Cloudflare's edge fell over, Azure regions stalled, and even AWS traffic spiked with errors as routing tables flapped. Music streams stopped mid-song, banking dashboards froze, newsroom CMSs blinked "503." One typo inside one provider turned millions of screens blank for hours. This was proof that a single technical spark can still torch a vast, centralized stack.
Just three days earlier, on 9 June 2025, Google emailed Maps users a different erasure notice: users must export their cloud-stored Location History, or any data older than 90 days will be deleted. Years of commutes, holiday trails, and personal memories could simply vanish. The decision sits with one company; the burden of preservation falls on each user.
Then in July 2025, a Human Rights Watch report detailed how Russian authorities had “doubled down on censorship,” blocking thousands of sites and throttling VPNs to tighten state control of the national internet. What a citizen can read now hinges on the flip of a government switch, not on the value of the data.
Today's Solution to Alexandria is On-Chain
To shield tomorrow's knowledge, there are three key layers – three complementary systems.
The first layer is IPFS, which content-addresses every file so that any peer holding a copy can serve it through its cryptographic hash, transforming the network into a digital bookshelf. On that foundation lies Arweave: pay once and the data gets etched into a ledger designed to outlast budgets, ownership changes, and hardware refresh cycles.
Finally, Filecoin provides ongoing accountability. Storage providers earn tokens only if there is continual proof that the bytes remain intact. Together, redundancy, permanence, and incentives ensure that an outage, a policy change, or a balance-sheet crunch becomes a nuisance instead of a catastrophe.
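As a small, concrete example of the first layer, the sketch below adds a document to IPFS and gets back the content identifier (CID) under which any peer holding a copy can serve it. It assumes a local IPFS daemon and the `ipfs-http-client` package; Arweave permanence and Filecoin storage deals would be layered on separately.

```typescript
// Minimal sketch: content-address a document on IPFS and print its CID.
// Assumes a local IPFS daemon on the default API port and the ipfs-http-client package.
import { create } from "ipfs-http-client";

async function archiveDocument(text: string): Promise<string> {
  const ipfs = create({ url: "http://127.0.0.1:5001/api/v0" });

  // Adding returns a CID derived from the content itself; the same bytes
  // always produce the same identifier, no matter which node serves them.
  const { cid } = await ipfs.add(text);
  console.log(`Stored under CID: ${cid.toString()}`);
  return cid.toString();
}

archiveDocument("Thousands of scrolls, this time with copies everywhere.");
```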
The Lesson in Hindsight
Alexandria burned because the best minds of its age had no alternative. Scrolls were physical, copies were scarce, and distance was measured in months at sea. Today, we live in a different century with better tools. If knowledge still disappears, it is by our choice, not our limits.
Every outage, purge, or blackout we suffer is a warning flare: the old risks are back, just wearing digital clothes. However, the cures already run in the background of the internet we use daily. We decide whether to rely on single servers or scatter our memories across many.
While this historic event belongs to the past, its lessons are ever more present in today’s society. The idea of decentralized storage may pass you by, but when set against the story of Alexandria, it’s hard to ignore.
Keep In Touch
Follow us on X: https://x.com/_DSAlliance
Follow us on LinkedIn: https://www.linkedin.com/company/decentralized-storage-alliance
Join DSA: https://www.dsalliance.io/members
The Rapidly Changing Landscape of Compute and Storage
Why Decentralized Solutions Make More and More Sense
The Decentralized Storage Alliance presented a panel in June alongside Filecoin Foundation featuring some of the top names in the decentralized physical infrastructure (DePIN) space, including representatives from Eigen Foundation, Titan Network, and IO.net.
The panelists were Robert Drost, CEO and Executive Director, Eigen Foundation; Konstantin Tkachuk, Chief Strategy Officer and Co-Founder, Titan Network; and MaryAnn Wofford, VP of Revenue, IO.net, moderated by DSA’s Ken Fromm and Stuart Berman, a startup advisor and Filecoin network expert. The event was hosted by the DSA and the Filecoin Foundation.
One of the points made early in the discussion was about the growing consensus on what the nature of DePIN is and why it’s so important. DePIN approaches not only move governance of compute and storage resources down to the most fundamental levels, they also create global markets for unused and untapped compute and storage. These resources can be within existing data centers, but also in consumer and mobile devices.
Panelists were also quick to agree that the benefits of these new approaches are being realized right now with exceedingly tangible results. Instead of traction being far off in the distant future, it was clear that developers are using DePIN infrastructure now and benefiting from it.
Other topics explored included their strategies for gaining institutional adoption as well as their visions of DePIN in the future. Here are some highlights of the conversation.
On The Benefits of Their Solutions
Robert Drost, Eigen Foundation
EigenLayer has three parts to its marketplace that allow ETH and any ERC-20 holder to restake their tokens in EigenLayer and direct them towards any DePIN, as well as any cloud project that wants elastic security and ultimately to be able to deploy not just decentralized infrastructure but verifiable infrastructure.
By restaking tokens and securing multiple networks, users create a shared security ecosystem that allows for the permissionless development of new trust-minimized services built on top of Ethereum's security foundation.
MaryAnn Wofford, IO.net
IO.net hosts 29 of the biggest open source models on our platform through one central API. That's one integration you have to build, and then all of a sudden you have all of these various models that you can test your applications with. We provide free tokens to gain access to models. We'll give you a certain amount a day, and with some models, like DeepSeek, we'll give you a million.
IO.net is one of the unique players that allows you to pay in crypto as well as gain rewards for doing so. For example, many organizations that have excess capacity with GPUs can post these to IO.net. They can say that their GPUs are available and they will get a reward from us because we are leveraging the Solana crypto network to support resources.
This decentralized approach gets you away from single siloed vendors who want to take over the entire marketplace and be defined as early winners. At the same time, we give the power to developers to create new alternatives, new options, new functionality, and bring new things to market, which is what we all live for and want to bring to our daily lives.
Konstantin Tkachuk, Titan Network
Titan Network is a DePIN network building an open source incentive layer that supports people across the globe in aggregating idle resources to power globally connected cloud infrastructure. Essentially, we help people share their idle resources and we help enterprises leverage those idle resources for their benefit at a global scale.
A powerful feature of the Titan Network is the ability to return the power to people to benefit from the growth of AI storage, compute, and beyond with the devices that they already have and they already own. We have this ability to share your resources, whether you have a personal computer that you want to share – which is how Bitcoin allowed you to do so in the early days – to providing your mobile phone for Titan Network applications to run TikTok and other CDN-like services right from your device.
On the Need for DePIN
Robert Drost, Eigen Foundation
DePIN is a whole re-imagining of what the public cloud looks like. The public cloud is something that in many ways looks decentralized but has actually become pretty centralized in terms of the governance and the control as well as the API software stack that AWS, Azure, and Google Cloud offer.
It's part of the reason why it's so hard for somebody to write software once and deploy it on all three clouds. It's also led to a lot of pricing inefficiency and to data being stuck in certain clouds. All of the cloud providers love for data to go in for free, but leaving the cloud costs about $40 per terabyte in bandwidth, which some people have estimated approaches a 99% gross margin in the US, meaning it's about a hundred times the raw cost.
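[To unpack that arithmetic using the speaker's own figures – with the raw cost implied by the "hundred times" claim – a back-of-envelope check looks like this.]

```typescript
// Back-of-envelope check of the egress-margin claim using the quoted figures.
const egressPricePerTB = 40;          // what the big clouds charge, per terabyte
const assumedRawCostPerTB = 40 / 100; // "a hundred times the raw cost" implies ~$0.40/TB

const grossMargin = (egressPricePerTB - assumedRawCostPerTB) / egressPricePerTB;
console.log(`${(grossMargin * 100).toFixed(0)}% gross margin`); // ≈ 99%
```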
DePIN allows us to actually move the governance all the way down to the fundamental level. It opens up the ability for anyone to run hardware using great technology like IO.net on the AI compute side and Filecoin on the storage side. Blockchains give us a lot of flexibility in the ability to create and run almost any possible software to replace the current infrastructure stack – and ultimately the platform-as-a-service and software-as-a-service stacks – with one unified stack across all clouds, including the big three. It's actually one big cloud of compute and storage.
MaryAnn Wofford, IO.net
To add to this, DePIN is a compute resource that is truly global because you're not really tied to a particular data center. The workloads for AI inference, for example, can be served to your end users at their point of access, wherever they are.
That's one of the key things people come to us for – the ability to be agnostic and to be highly flexible in terms of compute resources. You're not dedicated to long-term contracts and you're not dedicated to one particular region. There's just a tremendous amount of flexibility as to where the access point of compute lies, creating a wider marketplace for compute resources.
Konstantin Tkachuk, Titan Network
I would also like to add that we often forget in the DePIN space that a large supplier of resources is average people around the globe, who get the opportunity to share and benefit from the infrastructure they already own. These physical devices – computers, data center-grade infrastructure, and anything in between – can now be shared, and the people who own these devices can get a benefit from it.
So all of this infrastructure is part of a movement that enables people around the globe to start contributing and be part of this value creation loop. Whether we talk about AI, storage, or compute – all of the amazing developments that are happening – people are able to share their resources and companies are able to benefit from this infrastructure. This is a fascinating movement that brings back the power from centralized data center-grade complexities to just people around the globe.
On DePIN Success Stories and Traction in the Market
Robert Drost, Eigen Foundation
EigenLayer is very much a channel partner play that includes B2B2C as well as B2B-to-institutions. We have increasing relationships directly with public cloud providers. This means we're integrating with and supporting their software stack. It also means that these major cloud providers are becoming operators inside of EigenLayer. We have over a thousand operators – many of them serious data center providers – who are happy to operate on the networks.
I think the misconception about it is that decentralized physical infrastructure means that we would not have the big three cloud providers. They can totally play in the market, they just have to actually operate and run nodes. And with EigenLayer we're seeing that we actually have operators being run by the cloud providers themselves for various [web3] protocols.
MaryAnn Wofford, IO.net
We are seeing a number of really key wins, particularly in multimodal AI applications specifically on the inference side of things – which we feel is a major use case going forward. We're working with a number of vendors who are building their own models – such as voice and imaging models – who want to make sure they're serving these in a timely manner to their end users.
We are seeing millisecond responses in terms of inference with these models and in other interactions. We know that we're just at the tip of the spear in terms of what we're able to do with AI applications and serving up multi-model capabilities, but [this use and its responsiveness] stands out as one of the key use cases we're seeing with the DePIN infrastructure that we have.
Konstantin Tkachuk, Titan Network
The biggest success we find is that we're enabling enterprise customers of traditional web2 services – such as content delivery networks and/or compute and storage services – to really benefit from the infrastructure we are aggregating at a community level. This really shows the potential of distributed DePIN systems in general, where infrastructure is crowdsourced.
For example, we're actively working with TikTok in Asia and enabling TikTok to save more than 70% of their CDN costs, simply because they are connecting to user devices and using those devices as CDN nodes to share content in Asia. This capability is possible due to our coordination layer, which brings a level of transparency and a reward infrastructure while maintaining performance at a level comparable to, and sometimes even better than, traditional cloud infrastructure.
This is where we see a lot of shift compared to previous cycles where blockchain was perceived as something slower or more complicated to use compared to traditional cloud. I think now we are at the inflection point of blockchain where we can do the same things with the same quality or even better due to some new features that are enabled by the blockchain primitives.
On Gaining Institutional Adoption
Robert Drost, Eigen Foundation
Our success with pulling in very important platforms-as-a-service and software-as-a-service middleware means that developers are able to much more rapidly build and deploy decentralized applications.
It means that over time, when we're looking at the transition, we will actually see major software providers and vendors – such as Snowflake, Cloudflare, and others – start supporting and deploying their software entirely using blockchain middleware.
It's going to inject real-world revenue into the financial side of blockchain, complementing real-world assets. If you look at the revenue in the public cloud space, you can see citations putting the big three US public clouds at many hundreds of billions of dollars of revenue. If you add in the software and others, you start looking at trillions in revenue being run in the infrastructure space. Just imagine all of that being settled via tokens in the crypto space.
MaryAnn Wofford, IO.net
Our ideal customers are the developers in small teams who are testing new applications, testing new capabilities – these exist inside of enterprises and big industries.
We interact a lot with the innovation lab side of big enterprises. We interact with test and dev groups. Although we're new, there are pockets within enterprise organizations that need speed, that need a really fast way to get a project up and running to prove value to the senior executive teams without significant exposure.
We saw this working recently with a couple of oil and gas companies where they wanted to build out test models for some applications they were thinking about, and they just wanted to move quickly. They didn't want to go through the internal process of getting approval.
This is just one aspect but our true North Star and the type of profile that we want to grow our business on is really these types of developers, the small teams that want to innovate and deliver quickly and then make sure that they can continue to grow their applications and ideas out.
Konstantin Tkachuk, Titan Network
What really shines and what really helped us to capture this institutional adoption was focusing on providing the service for an enterprise that's complete and whole. The web3 space made a little bit of a mistake in the beginning where we tried to provide access to one specific resource and weren’t solving the customer problem as a whole.
Now we are all maturing and figuring out that customers need complete solutions. They don't want to figure out how to do the integration. They want to just get the service and run with it. We are at this level where we're starting to provide services to those who just need it [as is] as well as providing additional functionality on top of it for those who want to build their own infrastructure and solutions by themselves.
More and more teams will see the value in using infrastructure that is functional and will deliver more value while also being cheaper or designed in a different way. The way we are building our infrastructure is that the end user of our partners should never know whether they use traditional cloud or DePIN infrastructure.
Stuart Berman, Filecoin Network Expert
Providers like Filecoin have massive capacity. We have exabytes of data that we store in a fully decentralized fashion. Anybody can spin up a node and start storing data, or act as a client and start using Filecoin services from any one of, or multiple, providers across the world, irrespective of location. You can specify a location or say, I don't care where it's located.
Moving forward, it's going to take a bit of effort to get us to the vision we're talking about, which is that users shouldn't really care about the underlying technology; they should care about its capabilities, its features, and their ability to control systems, monetize their data, and get the most value out of the information that they either own and control or are part of some ecosystem around.
Public data doesn't have severe requirements around privacy, so access control is a good place to start, and that's where we've made a lot of inroads, but there's a lot of work to be done to reach the vision of full decentralization and the promises it brings.
On The Impact of DePIN on the Future
Robert Drost, Eigen Foundation
EigenLayer also supports things that you don't have in the regular web2 space, such as Proof of Machinehood, which is an analog to Proof of Humanity. For example, there was an AI coding project recently where it came out that it was all a fraud. There were 700 engineers pretending to be AI models building applications. Verifying that there's a real AI model running versus a human is actually a valuable thing.
[One of our partners] supports another great proof which is Proof of Location. You can actually see that your protocols are running across all the continents or you can provide benefits and higher rewards for people who are supporting your protocol in different parts of the world.
We also support something called “applicable security” where, if there is a violation of the decentralized protocol, the stakers' funds are not just burned, they are given to the participants. You can actually have what looks a little bit like insurance – although it's really more like legal restitution, in that if you have losses then you will receive compensatory funds for what you lost.
MaryAnn Wofford, IO.net
Decentralization of compute and storage gives developers the tools to unlock innovation. Developers have certainly been siloed, with only a couple of players benefiting – we could count on our hands the players that have really driven the recent economic model for software development.
It's the VCs, the hyperscalers, and so on who really won that battle. Decentralized infrastructure provides the ability to create new things. It will provide the scale that's going to be needed, with a very low upfront investment, in order to test new capabilities such as AI models.
Konstantin Tkachuk, Titan Network
The opportunity to allow people to benefit from the growth around them is an incredible feature. Having the ability to bring people back into the growth loop at a societal level will help offset the fears of AI replacing them, or something else replacing them.
Now they can provide services to AI, which pays them on a daily basis for running those services. This powerful inclusion of people back into the loop is something that will see a lot of appreciation and value, especially if we can communicate it well across various channels and levels.
Stuart Berman, Filecoin Network Expert
The core aspect of decentralization is user control. It's not entirely about distribution – it's about who can set parameters, who can set prices, who can decide who can see my data, who can process it, who can store it.
This is critical, and it's something we've seen in the decentralized storage space: we've gone from storing data to being able to do compute-over-data. Ultimately, what are people getting out of this in terms of value for their information?
I don't think of smart contracts as much technically as I do in terms of it being a business agreement or a personal agreement in a true legal contractual sense. You can use this data for that and I can control who has access to it and how often they have access to it and for what purposes they have access to it.
We're even seeing regulations being developed, in particular around data as an asset class, which give you, the owner of the data, specific legal rights to the information – versus the other way around today, where you turn the data over to somebody and they seem to possess the rights to it.
Berlin Decentralized Media Summit, June 2025
The Berlin Decentralized Media event was held over two days at two different venues, with Friday night's session hosted by Publix, a centre for journalism in Berlin, and Saturday morning held in the Berlin "Web3 Hub" space, thanks to Bitvavo.
On Friday, MC and co-organizer Arikia Millikan, who has been working in the space for a number of years, introduced the event, the topic, and keynote speaker Christine Mohan, whose CV spans decentralized journalism startup Civil, the New York Times, and the Wall Street Journal. She talked about the need for decentralization before journalism is hollowed out by a combination of budget constraints and the replacement of real investigation by increasingly syndicated content produced with little regard for its value as information for readers, the priority being to drive engagement with platforms at low cost.
The theme was expanded in a panel where she was joined by Sonia Peteranderl and Wafaa Albadry, airing concerns about the trustworthiness of content, the persistence of information, and increasing censorship. Setting the stage for the next day's discussion might have left the audience feeling bleak, but alongside the panelists' own work, Olas' Ciaran Murray and Filecoin's Jordan Fahle talked about some of the decentralized storage tools and techniques available to help address these issues.
The dinner was well set up for ongoing conversations, with some prompting from Miho and Paul on a range of relevant topics. Conversations inside and outside in Publix's garden, and passionate discussions, were visibly appreciated by the participants. Themes covered the gamut of issues introduced, as the open-ended and informal setting sowed the seeds of ongoing collaboration over a plate and a drink.
Saturday was a more substantial set of talks, panels, and sponsor presentations, addressing in turn various aspects in more depth. After being welcomed and introduced to the Web3 Hub, Filecoin's Jordan Fahle talked about how fast content disappears from the Web (often within a few years), and how Filecoin's content-addressing model of blockchain-backed decentralized storage can help not only keep content around, but help identify its provenance - a key need for quality journalism.
The day's first panel featured Arkadiy Kukarin of Internet Archive, River Davis of Titan, and Alex Fischer of Permaweb Journal, exploring preservation: how to ensure information doesn't disappear from the Web in a modern-day equivalent of the burning of the Library of Alexandria, all too often the result of a decision about ongoing payment for centralized cloud-based storage. The discussion revolved around censorship resistance and the ability to protect information long-term that is a direct consequence of well-designed decentralisation protocols, but also the place of metadata in enabling persistence in a decentralized environment, and the tradeoffs required because storage isn't in some magical cloud but is always, in the end, on physical devices in real places.
Security expert Kirill Pimenov then discussed messaging applications and their security properties, dissecting the landscape and the depressing reality of common communication systems. As well as pointing to high-quality platforms like Matrix and SimpleX, he noted the need to consider reality: there possibly isn't much need for concern about someone finding out what you asked your friends to bring to dinner, and it is important to be able to communicate, even if you have to think about how to do so knowing you have low security.
Christine Mohan moderated the next panel, with Liam Kelly of DL News, Journalist and AI specialist Wafaa Albadry, and Justin Shenk of the Open History project. The panel considered issues and changes that come with increasing use of AI tools in journalism. The issues ranged from the ethical use of deepfakes to protect sources to the impact on trust and engagement of AI in the newsroom, but the speakers also noted the crucial role of people and their own skills.
Among presentations describing solutions, Ciaran Murray presented how Olas uses decentralized technology to support better recognition of quality, and matching value flows.
A talk from the Electronic Frontier Foundation's Jillian York, along with artist Mathana, explored the issues of censorship in more detail, looking at how media have portrayed, or censored, various issues over many decades. From so-called 1960s counter-culture to contemporary recognition of the rights of marginalised communities today, they elaborated on how decentralization has played an important role in enabling a breadth of perspectives to be available and represented, as a bulwark against the repression of difference and against totalitarianism, where the state directs the people rather than the democratic ideal of the inverse.
Arikia presented the work of Ctrl+X to decentralise publishing and enable journalists in particular to own and continue to monetize their work, and Matt Tavares of Protocol Labs gave a talk motivated by the collaborative use of OSINT – open-source intelligence (he has an intelligence background; for the rest of us, that means publicly available data) – and how it isn't properly credited and remunerated over its lifetime, but crucially, should and could be.
A final panel of Matt Tavares, Arikia Millikan, and Ciaran Murray considered the tension between privacy and transparency. The topic obviously isn't confined to the world of decentralized media: the needs and rights of the public to access important information, particularly where it touches on powerful interests that seek to regulate, moderate, or benefit from our behaviour, have to be balanced against the ability of individuals to exercise their rights through a basic assumption of privacy.
With that, and thanks to the many people who worked to put it together, including the sponsors, the day formally wrapped up. However, once more the conversations continued, spilling into the surrounding spaces, outside, and further. We look forward to the next edition as we continue our support for the development of decentralized media, one of the multiple fields where innovative adoptions of Decentralized Storage are bringing concrete benefits.
The DePIN Vision: Insights from Lukasz Magiera
Welcome and thank you for sitting down with us. You have an expansive vision of compute and storage. Can you explain this vision and what drives it?
Lukasz Magiera: My main goal is to make it possible for computers on the Internet to do their own thing and not have to go through permissioned systems if you don't need to. As a user I just want to interact with an infinitely scalable system where I put in money and I get to use its resources. Or I bring resources – meaning I host some computers – and money comes out.
The middle part of this should ideally be some kind of magical thing which handles everything more or less seamlessly. This is one of the main goals of decentralized systems like Filecoin and DePIN – to build this as a black box where, depending on who has resources, you establish a really simple way to get and share resources that is much more efficient than having to rely on the old-school cloud.
So you're looking not only at the democratization of compute and storage but also increasing the simplicity of using it?
Lukasz Magiera: There are a whole bunch of goals but all of them are around this kind of theme. There is the ability to easily bring in resources into the system to earn from those resources. Then there's the ability to easily tap into these resources.
And then there's more interesting capabilities – let's say I'm building some service or a game and it might have some light monetization features, but I really don't want to care about setting up any of the infrastructure. This is something that the cloud promised but didn't exactly deliver – meaning that scaling and using the cloud still requires you to hire a bunch of people and pay for the services.
The ideal is that you could build a system which is able to pay for itself. The person or the team building some service basically just builds the service and all of the underlying compute and storage is either paid for directly by the client or the service pays for its resources automatically via any fees while giving the rest of its earnings to the team.
It's essentially inventing this different model of building stuff on the Internet whereby you just build a thing, put it into this system, and it runs and pays for itself – or the client pays for it without you having to pay for the resources yourself. In this way, future models of computing will become much more efficient.
That’s certainly a grand vision. What does it take to get there?
Lukasz Magiera: There is a vision but it is, I would say, a distant one. This is not something we will get to any time soon. Filecoin itself has taken about eight years to get to where it is, and that was with intense development. I know we can get there, but this is going to be a long path. I wouldn't expect to have a system that's even approximately close to what I envision in less than 10 years.
THE SOFTWARE BEHIND THE FILECOIN NETWORK
Let’s step back a bit into where we are now. You are the co-creator of Lotus, can you describe it to us?
Lukasz Magiera: Lotus is the blockchain node implementation for the Filecoin network, so when you think of the Filecoin blockchain, you think mostly about Lotus. It doesn't implement any of the actual infrastructure services, but it does provide the consensus layer that underpins all of the protocols that storage providers offer.
The interesting thing is that clients [those who are submitting data to the network] don't really have to run the consensus aspect. They only have to follow and interact with the chain, and they do this just to settle payments and to look at the chain state to see that their data is still stored correctly.
On the storage provider side, there are many more interactions with the chain. Storage providers are the ones that have to go to the chain and prove that they are still storing data. When they want to store new pieces of data, they have to submit proofs for that. Part of consensus also consists of mining blocks, so they have to participate in that block mining process as well.
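To make that client/provider split concrete, here is a minimal sketch of the client-side view: the client never mines blocks, it only follows the chain and reads state. It assumes a local Lotus node exposing its standard JSON-RPC endpoint (commonly http://127.0.0.1:1234/rpc/v1); the path, auth requirements, and method set can vary by Lotus version, so treat this as illustrative rather than definitive.

```go
// Minimal sketch: a Filecoin client following the chain via a Lotus node's
// JSON-RPC API. Assumes a local node; endpoint and auth may differ by version.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type rpcRequest struct {
	Jsonrpc string        `json:"jsonrpc"`
	Method  string        `json:"method"`
	Params  []interface{} `json:"params"`
	ID      int           `json:"id"`
}

// call performs a single JSON-RPC request and returns the raw result.
func call(endpoint, method string, params []interface{}) (json.RawMessage, error) {
	body, _ := json.Marshal(rpcRequest{Jsonrpc: "2.0", Method: method, Params: params, ID: 1})
	resp, err := http.Post(endpoint, "application/json", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var out struct {
		Result json.RawMessage `json:"result"`
		Error  *struct {
			Message string `json:"message"`
		} `json:"error"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	if out.Error != nil {
		return nil, fmt.Errorf("rpc error: %s", out.Error.Message)
	}
	return out.Result, nil
}

func main() {
	endpoint := "http://127.0.0.1:1234/rpc/v1" // assumed local Lotus node

	// Follow the chain: fetch the current head tipset and print it.
	head, err := call(endpoint, "Filecoin.ChainHead", nil)
	if err != nil {
		panic(err)
	}
	fmt.Printf("current head tipset: %s\n", head)
}
```

A storage provider, by contrast, also has to submit proof messages and participate in block production – the heavier chain interaction that Lotus Miner handled and that Curio now handles.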
Can you explain the difference between Lotus and Curio?
Lukasz Magiera: There are multiple things. Lotus is Lotus and Lotus Miner. Lotus itself is the chain node. Lotus Miner is the miner software that storage providers initially ran. Lotus Miner is now in maintenance mode and Curio is replacing it. Lotus itself is not going away. It is still something that is used by everyone interacting with the Filecoin blockchain.
Lotus Miner is the software that initially implemented all of the Filecoin storage services. When we were developing the Lotus node software, we needed software to test it with – if you’re building blockchain software, then you need some software that mines the blocks according to the rules. This was the genesis of Lotus Miner.
Initially it was all just baked into a single process called Lotus. We then separated that functionality into a separate binary and called it Lotus Miner. The initial version, however, had limited scalability. When we got the first testnet up in 2019, several SPs wanted to scale their workloads to more than just a single computer and so we added this hacky mode to it so it could attach external workers. This was back in the days where sealing a single sector took 20 hours and so there was no consideration of data throughput. Plus scheduling wasn't really an issue because everything would take multiple hours.
Over time, the process got much better and much faster and SPs brought way more hardware online and so we had to rewrite it to make it better and more reliable. We didn't have all that much time and the architecture wasn't all that great but we did a rewrite that was reasonable. It could attach workers and workers could transfer data between them. This is what we launched the Filecoin Mainnet with in 2020.
For a while thereafter, most of the effort went into making sure the chain didn't die, so very little effort was put towards making Lotus Miner better. At some point, though, we got the idea: “What if we just rewrite Lotus Miner from scratch – just throw everything away and start fresh?” And so this is how Curio came about. The very early designs started about two years ago, and Curio launched in May 2024.
Can you describe Curio in more detail?
Lukasz Magiera: Curio is a properly scalable, highly available implementation of Filecoin storage provider services. More broadly, it is a powerful scheduler coupled with this runtime that implements all of the Filecoin protocols needed for being a storage provider. The scheduler is really the thing that sets it apart from Lotus Miner.
With Curio, it is really simple to implement any DePIN protocol where you have some process running on a set of machines that needs to interact with a blockchain. Basically, it lets us implement protocols like Proof of Data Possession (PDP) in a matter of a week or two. The first iteration of PDP really did take me a week to get running – it went from nothing to being able to store data and to properly submit proofs to the chain.
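As an illustration of that pattern – and only the pattern; this is a hypothetical sketch, not Curio's actual code, schema, or task names – a cluster of identical processes can share a task table in a database, with each node atomically claiming work such as "submit a PDP proof" or "seal a sector" and running the matching handler:

```go
// Hypothetical sketch of a database-backed task scheduler: every node runs the
// same binary, polls a shared table, and claims one unowned task at a time.
package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "github.com/lib/pq" // assumed Postgres driver for this illustration
)

func claimAndRun(db *sql.DB, node string) error {
	// Atomically claim one unowned task; SKIP LOCKED lets many nodes poll safely.
	row := db.QueryRow(`
		UPDATE tasks SET owner = $1, claimed_at = now()
		WHERE id = (SELECT id FROM tasks WHERE owner IS NULL
		            ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED)
		RETURNING id, kind`, node)

	var id int64
	var kind string
	if err := row.Scan(&id, &kind); err != nil {
		return err // sql.ErrNoRows simply means there is no work right now
	}

	// Dispatch to the protocol-specific handler (placeholders here).
	switch kind {
	case "pdp_prove":
		fmt.Printf("node %s: submitting PDP proof for task %d\n", node, id)
	case "seal_sector":
		fmt.Printf("node %s: sealing sector for task %d\n", node, id)
	}
	_, err := db.Exec(`UPDATE tasks SET done = true WHERE id = $1`, id)
	return err
}

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/cluster?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	for {
		if err := claimAndRun(db, "node-a"); err != nil && err != sql.ErrNoRows {
			log.Println(err)
		}
		time.Sleep(2 * time.Second) // simple polling loop
	}
}
```

The attraction of this shape is that supporting a new protocol mostly means adding a new task kind and handler, which is roughly why a first PDP iteration could come together in about a week.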
Is Curio then able to incorporate new and composable services for storage providers to provide?
Lukasz Magiera: Yes. The short version is that if you have a cluster, you just put Curio on every machine, configure the machines to connect with each other, and the rest figures itself out. The storage provider shouldn't really have to worry about the scheduling or any of the things the software should do by itself.
If something doesn't work, you have proper alerts. You have a nice web UI where it is easy to see where the problems are. And the architecture is much more fault tolerant: you can essentially pull the plug on any machine and the cluster will still keep running and keep serving data to clients.
THE STATE OF DECENTRALIZED STORAGE
Where are we with decentralized storage? What are you and your team building near term and then what does it look like long term?
Lukasz Magiera: Where we are now with Filecoin is we have a fairly solid archival product. It could use some more enterprise features for the more enterprise clients – this is mostly about access control lists (ACLs) and the like.
We just shipped Proof of Data Possession, and that should help the larger storage providers. Then you have a whole universe of things we can do with confidential computing, as well as developing additional proofs – things that enterprises would want, such as proof of data scrubbing, i.e. proving there's been no bit rot. We can also use the confidential compute infrastructure to do essentially full-blown compute on data. We could be the place that allows clients to execute code on their own data. And this could include GPU-type workloads fairly easily, as far as I understand.
On the storage provider side right now, you have to be fairly large. You have to have a data center with a lot of racks to get any scale and make a profit. You need a whole bunch of JBODs (Just a Bunch Of Disks) with at least a petabyte of raw storage or more to make the hardware pay for itself. You need hardware with GPUs and very expensive servers for creating sectors. The process is much better now with Curio, but we can still do better.
The short-term plan for Curio is to establish markets for services so that people don't have to have so many GPUs for processing zk-snarks [for the current form of storage proofs]. Instead, let them buy services for zk-snark processing and sector building so they don't need these very expensive sealing servers. Then all they really need is storage hardware and decent Internet access – and for that they maybe don't even need a data center at all. So the short term is about enabling smaller-scale pieces of the protocol.
Can you talk about these changes in the processing flow for storing data?
Lukasz Magiera: Essentially, we have to separate the heavy CapEx processing involved in creating sectors for storage [from the process of storing the data]. You have storage providers who are okay with putting up more money, but they don't want to lock up their rewards. They just want to sell compute.
This kind of provider would just host GPUs and would sell zk-snarks. Or they would host SuperSeal machines and either create hard drives with sectors that they would ship or they would just send those sectors over the Internet.
Separately you have storage providers who want to host data and just have Internet access and hard drives. The experience here should really just resemble normal proof-of-work mining. You just plug boxes into the network and they just work whether it's with creating sectors or offering up hard drives.
Can you talk more about the new proofs that have been released or that you’re thinking about?
Lukasz Magiera: Sure. Proof of Data Possession (PDP) recently shipped, and this is where storage providers essentially host clear-text data for clients. The cool thing with PDP is that, on a wider level, it means clients can upload much smaller pieces of data to an SP, and we can finally build proper data aggregation services where clients essentially rent storage that they can read, write, and maybe even modify on a storage provider. When they want to take a snapshot of their data, they just make a deal and store it, and so this simple proof becomes a much better foundation for better storage products.
We could also fairly easily do attestations around proving there has been no bit rot. We could also do attestations for some data retrieval parameters as well as some data access speeds. Attestations for data transfer over the Internet might be possible but they are also very hard. These things could conceivably feed into an automated allocator so that storage deal-making could happen much more easily.
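The core idea behind a possession proof can be sketched in a few lines: the verifier keeps only a commitment (here a simple Merkle root) over the data blocks, and the provider must answer a random challenge with the challenged block plus a path back to that root. This is a toy illustration of the general idea, not Filecoin's actual PDP construction, parameters, or on-chain flow.

```go
// Toy possession proof: commit to data with a Merkle root, challenge a random
// block, and verify the returned block plus its Merkle path against the root.
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

func hash(parts ...[]byte) []byte {
	h := sha256.New()
	for _, p := range parts {
		h.Write(p)
	}
	return h.Sum(nil)
}

// buildTree returns every level of the Merkle tree (level 0 = leaf hashes).
func buildTree(blocks [][]byte) [][][]byte {
	level := make([][]byte, len(blocks))
	for i, b := range blocks {
		level[i] = hash(b)
	}
	levels := [][][]byte{level}
	for len(level) > 1 {
		var next [][]byte
		for i := 0; i < len(level); i += 2 {
			j := i + 1
			if j == len(level) {
				j = i // duplicate the last node on odd-sized levels
			}
			next = append(next, hash(level[i], level[j]))
		}
		levels = append(levels, next)
		level = next
	}
	return levels
}

// prove collects the sibling hashes needed to recompute the root from leaf idx.
func prove(levels [][][]byte, idx int) [][]byte {
	var path [][]byte
	for _, level := range levels[:len(levels)-1] {
		sib := idx ^ 1
		if sib >= len(level) {
			sib = idx
		}
		path = append(path, level[sib])
		idx /= 2
	}
	return path
}

// verify recomputes the root from the challenged block and its path.
func verify(root, block []byte, idx int, path [][]byte) bool {
	cur := hash(block)
	for _, sib := range path {
		if idx%2 == 0 {
			cur = hash(cur, sib)
		} else {
			cur = hash(sib, cur)
		}
		idx /= 2
	}
	return bytes.Equal(cur, root)
}

func main() {
	blocks := [][]byte{[]byte("a"), []byte("b"), []byte("c"), []byte("d")}
	levels := buildTree(blocks)
	root := levels[len(levels)-1][0]

	challenge := 2 // in practice derived from unpredictable chain randomness
	proof := prove(levels, challenge)
	fmt.Println("possession proof verifies:", verify(root, blocks[challenge], challenge, proof))
}
```

In practice the challenge would be derived from unpredictable chain randomness and the proof checked on chain, which is what makes the resulting attestation useful to anyone beyond the two parties.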
Some proofs appear easy to do but in reality are hard problems to solve. What are your thoughts here?
Lukasz Magiera: Proof of data deletion is one of those that is mathematically impossible to prove but it is much more possible to achieve within secure enclaves. And so if you assume that a) the client can run code, b) that it can trust the enclave, c) that the provider cannot see or interact with that code, and d) that there is a confidential way to interact with the hard drive – then essentially a data owner could rent a part of a hard drive or an SSD through a secure enclave in a way that only they can read and write to that storage.
It wouldn't be perfect. Storage providers could still throttle access or turn off the servers, but in theory, the secure enclave VM would ensure that read and write operations to the drive are secure and it would ensure that the data goes to the client. So it essentially would be possible to build something where the clients would be able to get some level of attestation of data deletion because essentially the SP wouldn't be able to read the data anyways. In this case you literally have to trust hardware, there's no way around it. You have to trust the CPU and you have to trust the firmware on the drive.
And so as Filecoin grows and adds new protocols and new proofs, can these be included in the network much more easily now because of Curio’s architecture?
Lukasz Magiera: Yes, essentially it is just a platform for us to ship capabilities to storage providers very quickly. If there are L2s that require custom software, it's possible to work with those L2s to ship their runtimes directly inside of Curio. If an SP wants to participate in some L2's network, they just check another checkbox, or maybe put in a few lines of configuration, and immediately they can start making money without ever having to install or move around any hardware.
CLOSING THOUGHTS
Any final thoughts or issues you’d like to talk about?
Lukasz Magiera: Yes, I would love to talk about the whole client side of things with Filecoin and other DePIN networks. Curio is really good on the storage provider side of things but I feel like we don’t have a Curio-style project on the client side yet. There are a bunch of clients but I don't think any of them go far enough in rethinking what the client experience could be on Filecoin and DePIN in general.
Most of the clients are still stuck in more basic ways of interacting with the Filecoin network: you put files into sectors and track them. It feels like we could do much better. Even basic things like proper support for erasure coding would be a pretty big win – having the ability to pay, say, 10% overhead but still get some redundancy. All of this coupled with just better, more scalable software.
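As a rough sketch of what that client-side erasure coding could look like – the library (klauspost/reedsolomon) and parameters here are illustrative choices, not a statement of how a future Filecoin client would actually do it – ten data shards plus one parity shard costs roughly 10% overhead while tolerating the loss of any one shard:

```go
// Illustrative Reed-Solomon erasure coding: 10 data shards + 1 parity shard,
// so ~10% overhead, and any single lost shard can be reconstructed.
package main

import (
	"fmt"
	"log"

	"github.com/klauspost/reedsolomon"
)

func main() {
	const dataShards, parityShards = 10, 1 // ~10% storage overhead

	enc, err := reedsolomon.New(dataShards, parityShards)
	if err != nil {
		log.Fatal(err)
	}

	payload := []byte("pretend this is a large file destined for several storage providers")

	// Split into data shards and compute the parity shard.
	shards, err := enc.Split(payload)
	if err != nil {
		log.Fatal(err)
	}
	if err := enc.Encode(shards); err != nil {
		log.Fatal(err)
	}

	// Simulate losing one shard (e.g. one provider goes offline)...
	shards[3] = nil

	// ...and reconstruct it from the remaining shards.
	if err := enc.Reconstruct(shards); err != nil {
		log.Fatal(err)
	}
	ok, err := enc.Verify(shards)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("all shards consistent after reconstruction:", ok)
}
```

Spread the eleven shards across different storage providers and the arithmetic becomes real redundancy: any single provider can disappear and the client can still reconstruct the original data.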
As a large client, I would just want to put a virtual appliance in my environment, have it be highly available, and be able to talk to Filecoin through a more familiar interface, much more like object storage.
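For a sense of what that more familiar interface could look like, the sketch below writes an object through a hypothetical S3-compatible gateway sitting in front of Filecoin; the endpoint, bucket, and credentials are placeholders, and nothing here implies such a gateway exists today in exactly this form.

```go
// Hypothetical sketch: the "object storage" client experience in front of
// Filecoin-backed storage. Endpoint, bucket, and credentials are placeholders.
package main

import (
	"bytes"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{
		Endpoint:         aws.String("https://filecoin-gateway.example.com"), // placeholder gateway
		Region:           aws.String("us-east-1"),
		Credentials:      credentials.NewStaticCredentials("ACCESS_KEY", "SECRET_KEY", ""),
		S3ForcePathStyle: aws.Bool(true),
	}))
	svc := s3.New(sess)

	// Store an object; behind the gateway this could become a Filecoin deal
	// plus a hot copy, but that plumbing stays invisible to the client.
	_, err := svc.PutObject(&s3.PutObjectInput{
		Bucket: aws.String("archive"),
		Key:    aws.String("reports/2025-q2.json"),
		Body:   bytes.NewReader([]byte(`{"status":"stored"}`)),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("object written through the gateway")
}
```

From the client's point of view it is just object storage; whether the bytes end up in sectors, possession-proved hot storage, or both is the gateway's concern.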
MetaMe joins Decentralized Storage Alliance
“The DSA is proud to welcome MetaMe into our community of innovators shaping the future of digital sovereignty. Under Dele's visionary leadership, MetaMe is redefining personal data ownership, advancing solutions that empower individuals and uphold the fundamental rights of users worldwide. We look forward to working together to foster open standards, decentralized infrastructure, and a more equitable digital economy.”
Stefaan Vervaet, President, Decentralized Storage Alliance
The Decentralized Storage Alliance (DSA) is thrilled to welcome MetaMe to our global network of pioneering organizations building the future of decentralized infrastructure. MetaMe, founded and led by visionary entrepreneur Dele Atanda, is reshaping how personal data is owned, controlled, and valued in the digital age. In a world where user information is routinely harvested and monetized without true consent, MetaMe offers a powerful alternative: a decentralized, self-sovereign personal data management platform where individuals are in charge of their own digital lives.
At a time when decentralized storage is becoming the foundation for a more secure and equitable internet, MetaMe’s mission perfectly aligns with the DSA’s vision of open standards, user empowerment, and innovation. Together, we are advancing technologies that protect individuals' rights while enabling new forms of ownership and value creation.
Dele Atanda brings a uniquely impressive background to this work. Over the course of his career, he has led major digital transformation initiatives for Fortune 100 companies such as Diageo, BAT, and IBM. He served as an advisor to the United Nations’ AI for Good initiative, authored the influential book The Digitterian Tsunami, and continues to be recognized as a thought leader at the intersection of AI, blockchain, digital rights, and ethical innovation.
Dele’s work is rooted in a deep belief: that technology should serve humanity — not the other way around. His leadership at MetaMe reflects this commitment, building a future where digital freedom and dignity are foundational principles, not afterthoughts.
By joining the DSA, MetaMe strengthens a movement toward a decentralized, user-owned internet where data is secure, portable, and empowering for everyone. We are excited to collaborate with Dele and the MetaMe team as we push the boundaries of what’s possible for personal data sovereignty, decentralized storage, and digital empowerment.
Please join us in welcoming MetaMe to the Decentralized Storage Alliance!
Learn more about their work at metame.com.
The future of storage is decentralized, and we’re just getting started.
Decentralized Media: The Future of Truth in a Trustless Web
'Traditional media is broken. We need to rethink how media works and rebuild this industry from the ground up. I think blockchains and crypto can be tools to create new business models that incentivize quality journalism, with censorship resistance infrastructure and payment rails.’
Camila Russo, Founder, The Defiant.
The DSA and The Defiant teamed up on April 22nd for a lively X Space on Decentralized Media. This X Space brought together voices from The Defiant, Filecoin, CTRL+X, and Akave to explore topics that included the importance of censorship resistance in journalism, how blockchain and decentralized storage can preserve truth, the role of AI in content creation and distribution, and the vision for a more sovereign internet. The discussion was led by Valeria Kholostenko, Strategic Advisor (DSA), Cami Russo (The Defiant), Arikia (CTRL+X), Clara Tsao (Filecoin Foundation), and Daniel Leon (Akave), who envisioned new economic models, all under the banner of 'The Death of Traditional Media and the Birth of a New Model.'
When asked on X, Cami Russo responded: 'We’re all just pawns in Google’s land.' That stark reality framed much of the discussion. Journalism, she argued, has become beholden to SEO-chasing clicks rather than truth. 'We need to fundamentally rethink how we operate. Crypto and blockchains must be part of how we rebuild.'
Arikia echoed that sentiment with a journalist’s weary wisdom, 'I’ve seen thousands of articles go offline, irretrievable.' Centralized platforms purge content in redesigns, or lose it through sheer neglect. The promise of blockchain? A world where writers, not platforms, are the custodians of their work.
That promise is no longer theoretical. Just last month, The Defiant officially partnered with Akave and Filecoin to migrate its full archive to decentralized storage. The collaboration is a milestone in media resilience, ensuring that every article past, present, and future remains tamper-proof, censorship-resistant, and permanently accessible.
This move means that even if traditional cloud providers fail, crash, or cave to pressure, The Defiant’s journalism lives on. As Daniel Leon explained, 'Now, even something like an Amazon outage can’t take The Defiant offline.' It’s not just about preservation, it’s about protecting the fourth estate from a malignant future.
This partnership demonstrates how Decentralized Storage can empower media orgs to move from dependency to sovereignty. From now on, The Defiant controls its own archive and more importantly, its own destiny.
Storage as Resistance
Clara Tsao dropped the stats: 'Trust in traditional media is at historic lows; only 31% of U.S. adults, and a paltry 12% of Republicans, express confidence in it.' The solution, she says, lies in 'verifiable, censorship-resistant systems.' Filecoin, the world's largest decentralized storage network, has worked to secure everything from the Panama Papers to AI training datasets from CERN.
Daniel Leon brought it home, 'A quarter of articles published between 2013 and 2023 are gone.' With The Defiant now preserved on Filecoin via Akave, that content has a digital afterlife. 'This isn’t just about decentralization for decentralization’s sake. It’s about resilience. It’s about control.'
Micropayments, DAOs, and the New Economics of Journalism
If we're going to rebuild the media, we need to rethink how we fund it. Russo pitched a decentralized newsroom: part DAO, part think tank, powered by token incentives. Arikia's CTRL+X is already building the tools: NFT-based licensing, seamless micropayments, and content ownership that aligns with a journalist's rights, not the platform's terms of service.
'The era of ‘just do it for exposure’ is over,' she said. Alongside existing models of publishing content for free, or requiring a subscription to an entire service just to read one document, 'Web3 offers an option: pay-per-article access. No more caste system where only the wealthy can read good journalism.'
AI: Frenemy or Fixer?
AI loomed large in the conversation, both as a tool and a threat. 'We’ve had a great experience with AI at The Defiant,' said Russo, noting how it handles routine stories so journalists can focus on deep dives. But she’s wary of the downside. 'If chatbots aren’t attributing their sources, that’s theft. Plain and simple.'
Tsao emphasized AI's reliance on trustworthy data. 'Blockchain lets us verify data provenance. Without that, AI becomes a house of cards.' She cited Filecoin’s role in archiving wartime journalism in Ukraine, proving that authenticity is not just a buzzword but a matter of international justice.
What Comes Next: DJs, Berlin, and Bridging the Gap
The X Space ended with a nod toward the future, namely the upcoming Decentralized Media Summit in Berlin.
Valeria asked: What voices are we still missing?
Cami Russo suggested bringing in traditional media leaders outside the Web3 bubble. Daniel Leon noted the shift happening: 'Two years ago, 'blockchain storage' was a red flag. Now, companies are curious. We’re at an inflection point.'
Arikia fully channeled Berlin energy, 'We need DJs. Musicians have been fighting for decentralization for longer than we have. They’ve got the scars and the stories.' Luckily, the leading culture DAO Refraction has you covered. As part of the broader social venture movement alongside projects like Farcaster, Lens, and others reimagining digital culture, Refraction bridges music, media, and Web3. It’s not just about vibes, it’s about building resilient creative ecosystems that can’t be muted or monetized by middlemen.
The Verdict? Decentralization is the Only Way Through
As institutions crumble and AI rewrites the rules of content creation, one thing is clear: the old model is broken. But from Filecoin’s storage rails to CTRL+X’s monetization tools to The Defiant’s AI-enhanced reporting, the scaffolding of a new system is already here.
As Valeria closed the session, she left us with a challenge, not a conclusion.
'The fight for truth is just beginning.'
DSA updates Data Storage Agreement Template
The DSA today published an update for its Data Storage Agreement template.
Developed by the TA/CA Working Group, the template provides a standard-form agreement for data storage that can be customized to match the specific parameters of a Storage Deal.
The update adds agreed service levels and costs to the template, enabling a single agreement to cover the needs of a wide range of storage customers, including Layer 2 providers acting both as a storage provider for a customer and as a customer of specific Storage Providers fulfilling part of their storage needs.
The template is available as a Google Doc that enables comments, and the DSA welcomes feedback from users of the document that we can use to update the template in future to meet users’ needs even better. To create a new agreement, make a copy of the document.
It is also available for download as a Word document that can be edited, converted into other formats, and signed by the relevant parties.