$50 Million Fundraising, ChatGPT Competition and Decentralized AI: Major Interview With Gonka Founders David Liberman and Daniil Liberman

Thu, 29/01/2026 - 8:48
Here’s how Gonka plans to bring decentralization to the LLM segment, and what role the GNK token plays in this process.
Cover image via U.Today

Disclaimer: The opinions expressed by our writers are their own and do not represent the views of U.Today. The financial and market information provided on U.Today is intended for informational purposes only. U.Today is not liable for any financial losses incurred while trading cryptocurrencies. Conduct your own research by contacting financial experts before making any investment decisions. We believe that all content is accurate as of the date of publication, but certain offers mentioned may no longer be available.

While mainstream usage of AI tools like Claude or ChatGPT is rocketing, their business opportunities are yet to be fully explored. Moreover, as centralized businesses, they may be vulnerable to manipulation, attacks and censorship.

The decentralized AI resource segment is still in its infancy. Led by David Liberman and Daniil Liberman, Gonka is one of the first big names in this segment. In this interview with U.Today, the AI innovators discuss the challenges and promises of decentralized AI networks, funding, tokenization, business ambitions and much more.

U.Today: Hi David and Daniil, thank you for coming. Please tell us a little bit about your project and your background in AI.

Gonka founders: Thank you for having us – we’re glad to share our story and talk about the ideas behind Gonka.

We’ve been building technology together for most of our lives. Our early work started in distributed computation, computer graphics, and game development, where we learned how to push hardware to its limits and think in terms of performance, efficiency, and scale. Over time, that experience naturally evolved into work on AI-driven products, augmented reality, and large-scale systems.

When Snapchat acquired us back in 2016, we worked on AR products used by hundreds of millions of people. Later, through Product Science, we focused on applied machine learning and performance optimization for large production systems. That gave us hands-on experience with how modern AI infrastructure actually operates – not in theory, but under real-world constraints.

That perspective ultimately led us to Gonka.

Gonka was born from a collaboration between web2 AI researchers and web3 infrastructure operators. In considering how AI infrastructure can scale globally, we drew inspiration from Bitcoin – not as a financial asset, but as a blueprint for coordinating massive, decentralized infrastructure through open, work-based incentives. While much of the industry has moved away from Proof-of-Work, our experience has shown that when it comes to compute and hardware innovation, work-based systems can be highly effective.

We saw that AI researchers and infrastructure operators bring complementary strengths that rarely come together. Researchers understand how intelligence can transform industries, while infrastructure operators excel at deploying data centers quickly and optimizing hardware through real economic incentives. Gonka brings these worlds together through a compute-first approach to AI infrastructure.

Development began in May 2024, and by the end of the year, the first nodes were already communicating on the Gonka testnet. The mainnet launched at the end of August 2025, marking the point where the network opened up for broader participation. In the months following the mainnet launch, Gonka aggregated the equivalent of over 10,000 H100 GPUs, with growing participation from GPU operators and AI builders worldwide. 

Gonka treats compute as open infrastructure – verifiable, efficient, and built around real contribution.

U.T.: Broadly, would you describe Gonka as an AI or a Web3 protocol?

G.: We’d describe Gonka first and foremost as a decentralized AI compute protocol.

The problem we’re addressing is an AI-native one: how compute for AI inference is produced, allocated, and governed. Today, that layer is highly centralized and controlled by a small number of providers, which shapes what can be built, who gets access, and at what cost. Gonka was designed to change that by treating compute as open infrastructure rather than as a gated service.

Decentralization plays a supporting role here. As Bitcoin once demonstrated for hardware coordination, decentralized incentives can be a powerful way to scale real-world infrastructure efficiently. We use Web3 primitives as tools, not as the product itself.

From a developer’s perspective, Gonka feels like AI infrastructure. Builders interact with the network through familiar OpenAI-style APIs and inference workloads without needing to think about blockchains. From a protocol perspective, decentralization allows the network to verify real computational contribution and govern itself without a central owner.
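For a sense of what that developer experience could look like, here is a minimal sketch that calls a Gonka host through the standard OpenAI Python client. The base URL, API key, and model name are hypothetical placeholders rather than documented Gonka values – the point is only that an OpenAI-compatible endpoint requires no blockchain-specific tooling.

```python
# Minimal sketch: querying a Gonka host via the standard OpenAI client.
# The endpoint URL, key, and model name are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://node.example-gonka-host.net/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_GONKA_API_KEY",                       # assumed credential
)

response = client.chat.completions.create(
    model="example-open-weights-model",  # placeholder model identifier
    messages=[{"role": "user", "content": "Explain decentralized inference in one sentence."}],
)

print(response.choices[0].message.content)
```

If the endpoint is OpenAI-compatible, an existing application could in principle switch over by changing only the base URL and credentials.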

So while Gonka uses Web3 mechanisms at the infrastructure level, its purpose is clearly AI-native.

U.T.: The AI segment is rocketing, and new concepts are popping up here and there. What makes Gonka special in this highly competitive landscape?

G.: What sets Gonka apart is that we’re not trying to build another AI product. We’re focused on a deeper layer: the economics and infrastructure of AI compute itself.

Most new AI projects compete on models, features, or user interfaces. Gonka operates one layer below that. We ask more fundamental questions: how is AI compute produced, who controls it, and what incentives shape its evolution?

If we compare Gonka with other decentralized projects, there are two major differences. First, Gonka aligns incentives around real work through a Proof-of-Work consensus mechanism, while other networks use Proof-of-Stake, which rewards stakers (capital). In Gonka, participants earn by contributing verified compute, not through financial engineering or early access. Second, nearly all computational resources in the network are directed toward meaningful AI tasks instead of being consumed by security overhead.
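To make that incentive difference concrete, here is a toy sketch – not Gonka’s actual consensus code – in which an epoch’s rewards are split in proportion to verified compute, so payout tracks work done rather than capital staked. All host names and figures are illustrative assumptions.

```python
# Toy model: epoch rewards split by verified compute, not by stake.
# All names and figures are illustrative assumptions, not protocol values.

def split_rewards(verified_compute: dict[str, float], epoch_reward: float) -> dict[str, float]:
    """Pay each host proportionally to the compute it actually proved."""
    total = sum(verified_compute.values())
    return {host: epoch_reward * units / total for host, units in verified_compute.items()}

# Hypothetical epoch: three hosts with different verified workloads (GPU-hours).
hosts = {"host_a": 120.0, "host_b": 300.0, "host_c": 80.0}
print(split_rewards(hosts, epoch_reward=10_000.0))
# host_b earns the most because it proved the most work,
# regardless of how much capital any participant holds.
```

Under Proof-of-Stake, by contrast, the dictionary above would hold staked balances instead of verified work, and rewards would flow to capital rather than to computation.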

Another key difference is governance. Gonka is designed so that the people who run the infrastructure also govern it. There’s no single owner deciding pricing, access, or direction. Over time, that makes the network more resilient and more closely aligned with the people who actually depend on it.

In conclusion, Gonka’s focus is intentionally narrow but foundational. We’re not competing to build the smartest model (though the smartest model could be developed using Gonka compute). We’re building the infrastructure that allows many different models – and many different ideas – to exist without being bottlenecked by centralized control.

U.T.: One of the main constructs of the Gonka narrative is "decentralized AI network." But why does AI need something decentralized, while over 99% of usage is covered by corporate-backed products like Perplexity and ChatGPT?

G.: The fact that most AI usage today is centralized doesn’t mean the model works – it means it’s the only one currently available.

In practice, access to advanced GPUs is highly concentrated, with a small number of hardware manufacturers and hyperscale cloud providers effectively determining who can build, where, and at what cost. Nvidia GPUs, for example, sit at the center of the AI stack, and access to them is increasingly shaped by long-term contracts, regional restrictions, and geopolitical considerations.

This concentration isn’t just a technical problem – it’s an economic and sovereignty issue. Compute access is becoming geographically constrained, with the U.S. and China competing to secure energy, data centers, and advanced chips. That dynamic risks placing large parts of the world in a structurally dependent position, limiting their ability to compete, innovate, or build sustainable AI economies.

At the same time, many existing decentralized networks fail in the opposite way. They burn a significant share of available GPU power on internal consensus and security mechanisms, while rewarding capital rather than actual computational contribution. Both problems discourage hardware providers and slow down real infrastructure innovation.

Decentralization becomes necessary when scale exposes these limits. Systems like Gonka are designed to align participation and influence with verified computational contribution, allowing compute resources to be used productively and enabling smaller, independent GPU operators to pool resources, compete on cost and efficiency, and reduce reliance on a handful of dominant providers. 

If AI is becoming foundational infrastructure – similar to electricity in the industrial era or the internet in its early days – then access to compute cannot be controlled by a few gatekeepers who unilaterally set prices and rules. Centralized AI products will continue to exist, but long-term resilience requires an alternative model. 

U.T.: Also, the concept of synergy between AI and Web3 seems to be overused now. What’s your point here – how will these two segments interact with each other, and what are the potential use cases?

G.: We agree that “AI and Web3 synergy” is often discussed in very abstract terms. Our view is much more practical: the interaction happens at the infrastructure and incentive layers, not at the level of slogans or features.

AI needs massive amounts of compute, while Web3 provides mechanisms for coordinating resources and incentives across many independent participants without relying on a single owner. Gonka sits at that intersection by using decentralized coordination to make large-scale AI compute accessible and verifiable. 

In practice, Web3 provides the coordination and verification layer – ensuring that compute contributions are real, measurable, and fairly rewarded. AI provides the workloads that give this infrastructure real-world purpose.

The most immediate use cases are AI systems that benefit from openness and verifiability. This includes on-chain or semi-on-chain AI agents, applications that require transparent inference, and systems where users need stronger guarantees about how AI outputs are produced. It also enables AI builders to run inference without being locked into a single centralized provider or API.

So for us, the synergy isn’t about putting AI on-chain or adding tokens to models. It’s about using Web3 mechanisms to build open, scalable AI infrastructure – and using AI workloads to give decentralized networks real economic utility.

We’ve seen this pattern before. Bitcoin showed how aligned incentives can give rise to massive, globally distributed compute infrastructure without centralized coordination. We see AI as the next step in that evolution – directing decentralized compute toward real-world intelligence rather than abstract security work.

U.T.: Could you call the current status of AI progress and adoption a bubble – and why?

G.: We wouldn’t call AI itself a bubble, but parts of the market around it definitely are.

What we’re seeing today is a familiar pattern that comes with every foundational technology. There is a real breakthrough – AI systems that are genuinely useful and widely adopted – and on top of that, a speculative layer where expectations grow faster than infrastructure and economics can support them.

That’s where the “bubble” narrative comes from. Not from AI’s capabilities, but from the assumption that scaling AI is cheap, frictionless, and infinitely available. In reality, AI progress is increasingly constrained by infrastructure. Compute is expensive, concentrated, and limited, even as demand continues to grow.

Historically, speculative excess tends to form when capital focuses on the most visible layer of innovation and underestimates the cost of what sits underneath. The speculative layer may correct, but the infrastructure layer keeps expanding – and that’s where long-term value is built.

So if there is a bubble today, it’s not in AI as a technology. It’s in the belief that AI can scale without rethinking how compute is built, owned, and governed. That gap is exactly what systems like Gonka are designed to address.

U.T.: What real-world implementations – in both B2B and B2C systems – do you see for Gonka?

G.: As mentioned, we see Gonka primarily as infrastructure, so its impact appears wherever AI inference is already critical but constrained by cost, access, or control.

On the B2B side, the most immediate use cases are inference-heavy systems such as AI agents, internal copilots, customer support automation, and data analysis pipelines. For many teams, the bottleneck today isn’t model quality, but pricing volatility, capacity limits, and lack of transparency from centralized providers. Gonka enables these workloads to run on open infrastructure, where access and cost are driven by real compute rather than vendor lock-in.

On the B2C and supply side, participation in Gonka is intentionally flexible. Hosts can contribute GPU compute independently – running their own infrastructure and earning directly based on verified computational work – or they can choose to join pools that aggregate resources from multiple smaller participants.

Pooling is designed as an option, not a requirement. It lowers the barrier to entry for Hosts with limited resources, while independent operators can participate at full scale on their own terms. Together, these models allow the network to combine large operators and smaller contributors into a single, scalable compute layer.

This flexibility increases available inference capacity for consumer-facing AI applications without relying on a single centralized provider, while keeping participation open to a wide range of infrastructure operators.

U.T.: What will be the use of the GNK token? Who is its potential audience?

G.: GNK is the native utility token of the Gonka network, primarily used to reward compute providers (Hosts) for verified computational contribution and to pay for AI compute on the network. It is designed to support real AI workloads, aligning incentives around actual performance rather than speculation.

For AI builders and developers, GNK provides access to decentralized AI inference on open infrastructure. More broadly, it also allows supporters of the network to contribute to the idea of AI abundance – where compute is coordinated efficiently and made more accessible, rather than controlled by a small number of centralized providers.

U.T.: Could you please indicate three milestones of Gonka and GNK you are proud of so far?

G.: One milestone we’re proud of is the $50 million strategic investment from Bitfury. Beyond capital, it validated that decentralized, high-efficiency AI compute can be built and operated in practice, drawing on Bitfury’s deep experience in large-scale infrastructure.

The second milestone is early network scale. Within the first three months, Gonka aggregated the equivalent of over 12,000 H100 GPUs, demonstrating strong demand from Hosts for a system where rewards are tied to verified computational contribution rather than speculation.

The third milestone is the growth of an active, infrastructure-driven community of GPU Hosts, AI builders, and researchers. This includes early contributors and advisors such as 6block, Hard Yaka, Gcore, Hyperfusion, Greg Kidd, and Val Vavilov, alongside sustained technical and media engagement. The community has continued to grow organically, with over 13,000 participants actively exchanging insights around infrastructure, performance, and real-world AI workloads. 

Much of this discussion happens openly in Gonka’s Discord, where GPU operators, builders, and researchers collaborate directly around the network’s development.

U.T.: Could you share some hardware specifications of the machines Gonka is running on?

G.: Gonka runs on a heterogeneous set of high-performance machines contributed by independent Hosts across the network. The protocol is designed for modern, data-center-grade GPU infrastructure optimized for AI inference.

In practice, this includes multi-GPU servers with H100- and A100-equivalent accelerators, enterprise CPUs, high-bandwidth memory, and fast interconnects. Exact configurations vary by Host, but all machines must meet performance thresholds suitable for large-scale inference workloads.

The network also supports pooling, allowing multiple Hosts to combine resources efficiently. Rather than enforcing rigid hardware standards, Gonka focuses on overall compute capability and reliability, enabling the network to scale by aggregating high-quality infrastructure from diverse operators.

U.T.: Trick question: Will AI startups ever be able to achieve viable unit economics?

G.: Yes – but not by default, and not all of them.

A big part of the current “unit economics problem” comes from compute economics: pricing opacity, capacity constraints, and the fact that inference costs scale with usage. As more AI products move from demos to always-on experiences, infrastructure becomes the limiting factor.

Two recent data points illustrate the direction clearly:

  • Inference is becoming the dominant workload. Deloitte’s 2025 analysis forecasts that inference will account for about two-thirds of all compute by 2026, up from roughly half in 2025 (and about a third in 2023). That shift matters because inference is where unit economics are won or lost. 
  • Centralization keeps compute expensive and capacity-limited. In 2025, Nvidia disclosed that two customers accounted for 39% of quarterly revenue – a striking signal of how concentrated “who gets the most advanced compute” can be at the top of the market.

So viable unit economics will increasingly depend on whether startups can access compute in a way that’s predictable, scalable, and cost-efficient. Startups that treat compute as an afterthought will struggle as usage grows. Those that build around better infrastructure economics – higher utilization, transparent pricing, and more resilient access models – can absolutely reach sustainable unit economics.
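As a back-of-the-envelope illustration of why compute pricing dominates unit economics, the sketch below compares per-user monthly gross margin for a hypothetical always-on assistant at two assumed inference prices. Every figure is illustrative, not taken from the interview.

```python
# Back-of-the-envelope unit economics for an inference-heavy product.
# Every figure below is a hypothetical assumption for illustration only.

def monthly_margin(requests_per_user: int, tokens_per_request: int,
                   price_per_1m_tokens: float, subscription_price: float) -> float:
    """Gross margin per user per month at a given inference price."""
    tokens = requests_per_user * tokens_per_request
    inference_cost = tokens / 1_000_000 * price_per_1m_tokens
    return subscription_price - inference_cost

# Assumed always-on assistant: 3,000 requests/month, ~2,000 tokens each, $20/month plan.
for price in (5.0, 1.0):  # assumed $ per 1M tokens at two compute price points
    print(f"${price}/1M tokens -> monthly margin: ${monthly_margin(3000, 2000, price, 20.0):.2f}")
# At $5 per 1M tokens the $20 plan loses $10 per user each month;
# at $1 per 1M it clears $14, so the compute price alone flips the business.
```

The numbers are invented, but the structure is the point: once usage scales, inference cost per user is the variable that decides whether a subscription is profitable.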

U.T.: What’s your general prediction regarding the capitalization of the AI segment in five years – and Gonka’s place here?

G.: Trying to predict exact capitalization figures five years out misses the more important point – the speed and non-linearity of AI adoption once conditions align.

We’re already seeing how quickly AI can move from “optional” to “default” once infrastructure and distribution fall into place. In 2025, Microsoft disclosed that GitHub Copilot reached around 20 million users, with roughly 90% of Fortune 100 companies using it in production. That kind of adoption would have been difficult to predict just a few years earlier, and it illustrates how fast AI usage can scale once it becomes embedded into real workflows.

As this shift accelerates, the bottleneck moves away from models themselves and toward access to reliable, affordable, always-on compute. Demand is growing faster than centralized infrastructure can comfortably absorb, which is why access, pricing stability, and capacity availability are becoming strategic constraints for both startups and large organizations. 

That’s where we see Gonka’s place. We’re not building toward a fixed roadmap or targeting a specific slice of AI market capitalization. Gonka is intentionally community-driven and evolves based on real demand for compute. Our view is that open, verifiable compute networks will become a critical layer of the AI economy – not replacing centralized providers, but constraining their power and expanding access globally.

If and when Gonka succeeds, it won’t be because it predicted the future more accurately than others. It will be because it was built like infrastructure – able to adapt, scale, and support AI adoption even when growth becomes faster and more uneven than traditional models expect.
