Loyal
Do the crime, don't do the time
We must defend our own privacy and freedom if we expect to have any.
02 We're screwed
Closed LLM stacks expose every conversation; Loyal exists because autonomy and privacy still matter.
Closed AI platforms pull every prompt into corporate silos. Loyal exists to show why that surveillance playbook fails builders and communities.
It's so over
They will give out your very thoughts at the first request.
In the last 10 years, Google, Apple, Meta, and Microsoft complied with more than 10 million government requests for user data.
"So if you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, we could be required to produce that, and I think that's very screwed up."
Let's win
The answer is infrastructure that makes intelligence verifiable, private, and aligned with its user. Not a promise, a proof: auditable code, attested runtime, and programmable payments. If an agent reasons or acts on your behalf, you should be able to verify what ran, who got paid, and why, without surrendering your data.
You can't audit a black box. Code must be readable, forkable, and reproducible; otherwise "trust" is just a word. Deterministic builds and signed releases let anyone match binaries to source. Community review hardens the stack; forks keep vendors honest; permissive licenses ensure the network can't be captured by a single gatekeeper.
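The binaries-to-source check above boils down to something simple: hash the artifact you downloaded and compare it against the digest published with the signed release. A minimal sketch, assuming a hypothetical manifest format (the field names here are illustrative, not Loyal's actual release schema):

```typescript
import { createHash } from "node:crypto";

// Hypothetical manifest published alongside a signed release.
interface ReleaseManifest {
  version: string;
  sha256: string; // digest of the artifact, reproduced by anyone building from source
}

// Hash the artifact bytes and compare against the published digest.
function verifyArtifact(artifact: Uint8Array, manifest: ReleaseManifest): boolean {
  const digest = createHash("sha256").update(artifact).digest("hex");
  return digest === manifest.sha256;
}
```

Because the build is deterministic, anyone who compiles the audited source gets the same digest, so a mismatch means the shipped binary is not the code that was reviewed.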
03 How it works
Loyal's stack has to serve users at the screen, in the agent runtime, and onchain.
Here is a rough sketch of how the layers hand off work. Renders ship later; for now we trace the flow with placeholders.
Pick a layer to see the role it plays in a Loyal deployment.
The first thing a user does when they open our app is check that the MagicBlock RPC is running within a confidential environment; we won't send any requests otherwise. The user then gets a PDA (program-derived address) linked to their Solana wallet. The PDA serves as an anchor for their chats and also handles payments.
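The client-side gate works like this: refuse to talk to an endpoint until its attestation checks out. A toy sketch of that logic; the report shape, the pinned measurement, and the field names are placeholders, not MagicBlock's real attestation API:

```typescript
// Placeholder attestation report; real TEE reports carry much more.
interface AttestationReport {
  enclaveMeasurement: string; // hash of the code running in the TEE
  signatureValid: boolean;    // result of verifying the report's signature (done elsewhere)
}

// Pinned measurement of the audited build (illustrative value).
const EXPECTED_MEASUREMENT = "abc123";

function isConfidential(report: AttestationReport): boolean {
  return report.signatureValid && report.enclaveMeasurement === EXPECTED_MEASUREMENT;
}

// Only send requests once the endpoint proves it runs the expected enclave code.
function sendRequest(report: AttestationReport, body: string): string {
  if (!isConfidential(report)) throw new Error("RPC is not attested; refusing to send");
  return `sent: ${body}`;
}
```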
The PDA also allows others to call our service through their smart contracts. Users pay for inference, and each payment is split between compute providers, developers building on Loyal infrastructure, and the protocol. This lets Solana developers ship apps without worrying about API keys or monthly billing; both are handled automatically as part of the service.
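The three-way split can be sketched in a few lines. The basis-point shares below are invented for illustration; the actual protocol parameters are not final:

```typescript
// Illustrative shares, in basis points (must sum to 10_000).
const SHARES_BPS = { provider: 8000n, developer: 1500n, protocol: 500n };

function splitPayment(lamports: bigint) {
  const provider = (lamports * SHARES_BPS.provider) / 10_000n;
  const developer = (lamports * SHARES_BPS.developer) / 10_000n;
  // Protocol takes the remainder, so rounding dust never goes missing.
  const protocol = lamports - provider - developer;
  return { provider, developer, protocol };
}
```

Routing the remainder to one party is a common on-chain pattern: integer division can lose a lamport or two, and the split must always conserve the total paid.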
04 Money
Loyal succeeds when the network stays solvent and builders get paid for aligned agents.
To keep the network running, we propose protocol-side fees, all flowing to the futarchy treasury unless tokenholders vote to move them elsewhere.
Protocol fee levers
- Inference protocol fee collected at settlement on paid inference throughput; sized for upkeep of the core network and security response.
- Marketplace registry fee on agent and tool listings transacted through Loyal’s marketplace, waived during the initial bootstrap window.
- Optional provider bond and attestation verification fee to underwrite third-party attestors; self-attested open-source agents only cover operational cost.
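The levers above can be sketched as a single settlement-time fee function. All rates and the bootstrap-waiver flag here are placeholders; real values would be set by governance:

```typescript
// Placeholder fee configuration; governance would set the real parameters.
interface FeeConfig {
  inferenceBps: bigint;     // fee on paid inference throughput, in basis points
  registryBps: bigint;      // fee on marketplace listings
  bootstrapWaiver: boolean; // registry fee waived during the initial bootstrap window
}

function protocolFees(inferenceVolume: bigint, listingVolume: bigint, cfg: FeeConfig): bigint {
  const inferenceFee = (inferenceVolume * cfg.inferenceBps) / 10_000n;
  // Registry fee is skipped entirely while the bootstrap waiver is active.
  const registryFee = cfg.bootstrapWaiver ? 0n : (listingVolume * cfg.registryBps) / 10_000n;
  return inferenceFee + registryFee;
}
```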
How you're making money
The treasury captures 100% of protocol fees by default and redeploys them where the community decides.
- Operations funding keeps the shared inference mesh, attestations, and support teams running so builders can stay focused on product.
- Bootstrap programs seed capital and liquidity for new providers or agents that the futarchy market signals as high leverage.
- Liquidity management rewards keep treasuries balanced and ensure trading venues stay deep for agents and tokenholders alike.
05 ICO
We consider futarchy a natural fit for Loyal's governance: let markets price expected outcomes and let policy follow proof. We plan to launch the ICO on the MetaDAO platform October 18–21.
Learn more about the upcoming ICO in this article: Loyal ICO details.
We want true decentralized governance that works when people disagree. That's why we choose futarchy: instead of tallying votes, it compares prices in two conditional markets for each proposal, one that settles if it passes and one if it fails. If the "pass" market trades higher for a sustained window, the proposal executes; if not, it doesn't. We prefer this because it gives a clear signal and is harder to game than low-turnout token voting.
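The decision rule reduces to comparing the time-weighted prices of the two conditional markets over the window. A toy sketch, assuming equal-length sampling; the threshold and inputs are illustrative, not MetaDAO's exact mechanism:

```typescript
// Simple time-weighted average price over equally spaced samples.
function twap(prices: number[]): number {
  return prices.reduce((a, b) => a + b, 0) / prices.length;
}

// Execute only if the "pass" market sustained a higher price than the "fail" market.
function shouldExecute(passPrices: number[], failPrices: number[]): boolean {
  return twap(passPrices) > twap(failPrices);
}
```

Using a sustained average rather than a spot price is what makes the rule hard to game: a last-minute price spike barely moves the TWAP.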
Use of funds
All proceeds enter the treasury; nothing sits with the company by default. We'll post a proposal before launch detailing an initial monthly budget.
That budget will cover:
- Compute & infrastructure for the testnet.
- Security & audits. External reviews of enclave code paths, payment rails, capability registry, and agent sandboxing; continuous attestation monitoring.
- Provider incentives. Grants and rebates for early attested node operators and agent/tool publishers.
- R&D & product.
- Ecosystem & community. Hackathons, documentation, and public goods (e.g., reference agents, test suites).
06 FAQ
Quick answers for people shipping with Loyal.
Find quick answers about Loyal’s launch, architecture, and participation requirements.
Toggle a question to see the short answer. Full documentation will follow once the stack is public.
What is Loyal?
Loyal is privacy-preserving, decentralized intelligence infrastructure. We pair trusted compute (hardware-attested TEEs) with a marketplace of AI "service agents," so people and apps can use powerful models without surrendering their data to centralized providers.
What problem are we solving right now?
Your LLM chats generally have no legal protection; providers can log, train on, and leak your data. We're building rails where the runtime, memory, and payments are not owned by a single company, and your data stays sealed inside confidential compute.
Why use TEEs instead of purely cryptographic approaches?
TEEs deliver near-native latency for real-time AI while keeping data encrypted in memory, which is the practical way to guarantee today that node operators cannot see user data. We still combine this with reproducible builds and other cryptographic controls.
What are client and service agents?
Your client agent holds preferences, budget, and routing policy. Service agents are specialized models that run inside TEEs and advertise capability vectors into a decentralized registry.
What does Loyal store?
Agents keep short-lived context to process a session, distill insights into encrypted knowledge graphs, and purge raw data on completion.
When will I be able to run production workloads?
Post-launch, after the SDK beta and security audits. Access rolls out in stages following the MetaDAO token event.
How can I contribute today?
Join working groups, review the lightpaper, and help stress-test threat models in community calls (plus Discord/GitHub).
Can operators, or even we, see your prompts or files?
No. Workloads run in enclaves with memory encryption; attestation plus reproducible builds ensure the code you inspected is the code that runs. We don't have access to your plaintext.
Is Loyal open source?
Yes - we publish code for public audit and reuse. Our manifesto is explicit: we write code, we ship, we publish, worldwide.
07 Manifesto
The internet was designed to be free and open. Free as in freedom.
HTTP is free and open for anyone.
SMTP is free and open for anyone.
AI was free and open for anyone.
We built it on sharing information, research, and science - only to see it closed off by five marketing startups making tools our children will become entirely dependent on. Their decisions already drive most of our fundamental experiences online. Every day we encounter an intelligence telling us what we can and cannot do with our lives.
We have perhaps months, not years, before these dependencies become too strong to break. Soon, these five companies will have access to most of the world's working population. They made their fortunes preying on us for the last decade, and now they're looking to sell our very thoughts.
Do you want this future for our children? When is enough enough?
We must defend our own cognitive freedom if we expect to have any. We must ensure that each party to an AI interaction has access to intelligence that serves their interests directly. In most cases, centralized control is not necessary. When I query an AI for information, there is no need for a corporation to log my thoughts. When I ask an AI to help me write or reason, the provider need not know what I'm thinking or why; they only need to provide the computational service. When my intelligence is mediated by systems controlled by others, I have no cognitive autonomy. I cannot selectively reveal my mind; I must always submit to their judgment.
Neocypherpunks write code. We know that someone has to write software to defend privacy, and since we can't get privacy unless we all do, we're going to write it. We publish our code so that our fellow Neocypherpunks may practice and play with it. Our code is free for all to use, worldwide. We don't much care if you don't approve of the software we write. We know that software can't be destroyed and that a widely dispersed system can't be shut down.
For privacy to be widespread it must be part of a social contract. People must come together and deploy these systems for the common good. Privacy only extends so far as the cooperation of one's fellows in society. We the Neocypherpunks seek your questions and your concerns and hope we may engage you so that we do not deceive ourselves. We will not, however, be moved out of our course because some may disagree with our goals.
The Neocypherpunks are actively engaged in making the networks safer for privacy. Let us proceed together apace.
Onward.
Christopher Cherniakov <chris@askloyal.com>
9 September 2025