Weekly API Evangelist Governance (Guidance)
A startup called OpenRouter has come up in my market research and in ongoing discussions enough lately that I found myself thinking about it throughout the weekend, so what better way to work through my thoughts and make the most of my time and energy than to write up a summary as this week's API Evangelist newsletter.
I want to understand what OpenRouter is and how it fits into the wider technology landscape, but also how it speaks to what we are building with Naftiko, not just what it does in the context of AI. Let's start with how OpenRouter describes itself on its website with its tagline.
“The unified interface for LLMs - Better prices, better uptime, no subscription!”
Behind the tagline, OpenRouter is an API. To be more precise, it is an API marketplace with an API. You integrate with OpenRouter via an API, and for the most part OpenRouter integrates with models and providers, the top two resources available via the platform, via APIs. In my experience, it is as if RapidAPI and Algorithmia had a baby, but one designed to ride this current wave of LLM hype and investment, providing what the platform and its customers need in this moment.

Reasons To Use OpenRouter
So why would you want to use OpenRouter? As with the tagline above, I prefer to start with their own language. These are the main reasons OpenRouter gives for why you should be using their platform and API as part of your operations.
Pricing
Performance
Standardized API
Real-World Insights
Consolidated Billing
Higher Availability
Higher Rate Limits
All very clear reasons, all focused on the LLM conversation. They speak to many of the pain points folks are clearly having working across multiple models and multiple providers. There are a lot of devils in those details, but I think we'll get to that in another post.
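The "standardized API" reason above is the key one for me: OpenRouter exposes an OpenAI-compatible chat completions interface, so one request shape works across models. Here is a minimal sketch of assembling such a request without sending it; the model name is illustrative.

```python
# Sketch of the kind of request OpenRouter's standardized API expects.
# The payload shape follows the OpenAI-compatible chat completions
# convention; the model name here is an illustrative example.
import json

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Assemble a chat completions payload without sending it."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("openai/gpt-4o", "Summarize this API.")
print(json.dumps(payload, indent=2))
```

Swapping in a different model is a one-string change, which is the whole point of standardizing the interface.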

Core Resources
There are other resources being offered by OpenRouter, but the top two are (Large Language) models and the providers of those models. You work with these resources through APIs, and the primary way OpenRouter works with these models and providers is via APIs; OpenRouter is just brokering your usage of these models across these providers.
Models (Large Language)
Providers (Services)
When you focus in on these two resources, it is easier to see OpenRouter as simply an API catalog, registry, or network. OpenRouter definitely offers other resources beyond models and providers, but mostly they are about providing access to model and provider APIs via a unified, common, or aggregate API.
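Treating models and providers as catalog entries makes the comparison concrete. A sketch of grouping a model list by provider, the way a catalog or registry would; the sample entries are made up, while the real list would come from the platform's models endpoint.

```python
# A sketch of treating OpenRouter's model list as an API catalog.
# The sample entries below are made up for illustration.
models = [
    {"id": "openai/gpt-4o", "provider": "openai"},
    {"id": "anthropic/claude-3.5-sonnet", "provider": "anthropic"},
    {"id": "meta-llama/llama-3-70b-instruct", "provider": "meta-llama"},
]

def models_by_provider(catalog):
    """Group model ids under their provider, catalog-style."""
    grouped = {}
    for m in catalog:
        grouped.setdefault(m["provider"], []).append(m["id"])
    return grouped

grouped = models_by_provider(models)
print(grouped)
```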

Multimodal
OpenRouter helps you broker the switch between different modes of leveraging large language models. There are four distinct modes that OpenRouter helps you move between, which helps you navigate the top ways in which you are likely using large language models to accomplish the variety of tasks you need for your business.
Text
Images
Audio
PDF
Similar to discovery below, this is another area of opportunity when it comes to a semantic layer. There is also a huge opportunity to mix and match modes across these areas, but also to go multimodal across other types of resources that aren't LLMs, like good old-fashioned machine learning and APIs that can be used to augment large language models, properly using all of the media types we have available to us when using HTTP.
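In practice, switching modes mostly means switching the shape of the message content. A sketch of a text-plus-image message in the OpenAI-compatible content-part format that OpenRouter brokers; the image URL is a placeholder.

```python
# Sketch of a multimodal message using the OpenAI-compatible
# content-part format; the image URL below is a placeholder.
def multimodal_message(text: str, image_url: str) -> dict:
    """Build one user message carrying both a text part and an image part."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

msg = multimodal_message(
    "What is in this picture?", "https://example.com/photo.png")
print([part["type"] for part in msg["content"]])
```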

Routing
As the name says, OpenRouter is about routing you to the optimal model for the job you need done. This is where the API catalog, marketplace, or network comparison I used begins to evolve for me. OpenRouter still brokers API calls like a RapidAPI would, but it will also intelligently route your API calls in useful ways.
Model Routing
Provider Routing
This is where I think the LLM world will begin to break new ground on older API concepts and practices. As I will talk about below, this is where routing will break free of concepts long held by engineering infrastructure and begin to enable routing that is aligned with business strategy rather than just the technical needs of engineering.
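The core routing idea can be sketched in a few lines: walk an ordered list of models and take the first one that answers. The `call_model` function here is a stand-in for a real API call, hard-wired so only the second model "succeeds".

```python
# A minimal sketch of model routing with fallback: try an ordered list
# of models and return the first successful completion. call_model is
# a stand-in for a real API call, not OpenRouter's actual mechanism.
def call_model(model: str, prompt: str) -> str:
    # Stand-in: pretend only the second model is available.
    if model == "anthropic/claude-3.5-sonnet":
        return f"{model}: ok"
    raise RuntimeError(f"{model} unavailable")

def route_with_fallback(models, prompt):
    """Walk the fallback list, returning the first model that answers."""
    for model in models:
        try:
            return call_model(model, prompt)
        except RuntimeError:
            continue
    raise RuntimeError("all models failed")

result = route_with_fallback(
    ["openai/gpt-4o", "anthropic/claude-3.5-sonnet"], "hello")
print(result)
```

The business-strategy angle is that the ordering of that list can encode price, compliance, or vendor preference rather than just uptime.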

Input/Output (IO)
Now we move into another area I feel isn't new, but where the LLM work will continue to push old concepts forward. I see prompts, presets, and completions just as I would any path, parameter, or request body for an API call, and I see structured outputs as simply an API response. However, there are some new and interesting opportunities within this new vocabulary.
Prompts
Prompt Caching
Presets
Completions
Transformations
Structured Outputs
There is a lot of opportunity for creativity and useful implementations at the input and output layer across many providers and their models, and just as much across SaaS providers and data and file connections, and even more when you think about these things holistically.
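Structured outputs are a good example of the response side of this layer: you attach a JSON Schema to the request and the model is constrained to answer in that shape. A sketch in the `response_format` style used by OpenAI-compatible APIs; the schema itself is an illustrative example.

```python
# Sketch of requesting structured output: a JSON Schema attached to
# the request in the response_format style used by OpenAI-compatible
# APIs. The schema and model name are illustrative examples.
schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "tags": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["title", "tags"],
}

request = {
    "model": "openai/gpt-4o",
    "messages": [{"role": "user", "content": "Summarize and tag this post."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "summary", "schema": schema},
    },
}

print(sorted(request.keys()))
```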

Discovery
When it comes to organizing and discovering your providers and models, as well as the other resources OpenRouter makes available, there are two main ways you can approach this. OpenRouter provides a minimal set of tools for shaping how your models and providers are organized, which is mostly about enabling the usage of OpenRouter rather than the shape of your own organization and discovery.
Metadata
Categories
Ideally we'd see more tagging and semantics present here. As I will talk about later, most of the things you need to organize your models, providers, prompts, and other resources are captured at the OpenRouter level in their cloud, and while they are really about letting you get your house in order, they can still help you minimally organize your model realm in their cloud.

Integrations
I am going to throw all the realities of integration that OpenRouter deals with into a single bucket. Integrations with models are first-class resources, but OpenRouter uses the language of the wider AI industry to organize integrations into two separate buckets, tool calling and web search, to get at everything else you might need.
Tool Calling
Web Search
This is where I diverge from everyone who works in the AI space. They see models and providers of those models, as well as a bunch of other tools you can use—and oh yeah, you can search the web for everything else. I just see APIs. Most of the tools are APIs, or should be APIs. The web is an API. This is the underpinning of my argument that OpenRouter is the next iteration of API catalogs, marketplaces, and networks. It is an interesting one, but just an incremental step.
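The "tools are just APIs" argument is easy to see in the wire format: a tool is described to the model as a function signature, which is really just an API surface. A sketch in the OpenAI-compatible tools format; the `get_weather` function is a hypothetical example.

```python
# Sketch of tool calling, where a tool is really just an API surface
# described to the model. get_weather is a hypothetical tool name.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, not a real API
        "description": "Fetch current weather for a city via an API.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

request = {
    "model": "openai/gpt-4o",
    "messages": [{"role": "user", "content": "Weather in Lisbon?"}],
    "tools": [weather_tool],
}

print(request["tools"][0]["function"]["name"])
```

Squint at that `parameters` object and you are looking at an API contract, which is the point.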

Keys
OpenRouter supports both OpenRouter credits and the option to bring your own provider keys (BYOK), and it allows you to create, read, update, and delete the keys you use to access OpenRouter. It is an interesting overlay on top of API providers: leveraging OpenRouter's keys via credits, or bringing your own keys, all while abstracting away this key layer with your own keys for the teams using OpenRouter. This begins to bend the reality of what is possible across the wider SaaS services beyond the model providers that OpenRouter has assembled.

Quality of Service
OpenRouter provides a pretty impressive list of features that address the overall quality of service available across all of these large language model (LLM) providers. It appears to be hitting all the notes teams face when they are trying to move models into production and scale however they are using LLMs in their business.
Availability
Latency
Performance
Uptime Optimization
Zero Completion Insurance
Rate Limits
Failover
Capacity
Switching
It is important to note that this list addresses quality of service across many different LLM providers via OpenRouter's platform. Many of these quality-of-service realities will shift or go away if you are running locally or on-premise, and they will look different across domains and the applications the models are powering.
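To make one of these quality-of-service concerns concrete, here is a sketch of latency-based provider selection: rank providers by average observed latency and pick the fastest. The numbers are made-up sample metrics, not real measurements, and this is my own simplification, not OpenRouter's actual algorithm.

```python
# A sketch of one quality-of-service concern: picking the provider
# with the best average observed latency. Numbers are made up.
latencies_ms = {
    "openai": [820, 790, 900],
    "anthropic": [610, 650, 700],
    "together": [450, 1200, 980],
}

def fastest_provider(metrics):
    """Return the provider with the lowest mean latency."""
    return min(metrics, key=lambda p: sum(metrics[p]) / len(metrics[p]))

best = fastest_provider(latencies_ms)
print(best)
```

Note how the provider with the single fastest sample loses on the average, which is why these decisions need real observed data rather than one-off benchmarks.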

Money
Now we get to the business side of things: helping us manage the money spread across all of these models we are using. I am guessing this is where a lot of businesses are hoping to get the help of OpenRouter, helping them abstract away the complexity of using many different large language models across many different providers, each with its own costs.
Prices
Credits
Spend
Activity
Billing
There is a huge opportunity to help businesses optimize at this layer. There is an even bigger opportunity in brokering data and insights at this layer. This is where you get into the economics of how this works or doesn't work, which is important across the models in use, but is even more important across all APIs, not just AI APIs.
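Consolidated billing at this layer boils down to aggregating token usage against per-model prices. A sketch of that aggregation; the usage records and per-1k-token prices are illustrative numbers, not actual rates.

```python
# A sketch of consolidated spend tracking across models: aggregate
# per-model cost from token usage records. Prices are illustrative,
# not actual provider rates.
usage = [
    {"model": "openai/gpt-4o", "tokens": 12000},
    {"model": "anthropic/claude-3.5-sonnet", "tokens": 8000},
    {"model": "openai/gpt-4o", "tokens": 3000},
]
price_per_1k_tokens = {  # hypothetical blended rates, USD
    "openai/gpt-4o": 0.01,
    "anthropic/claude-3.5-sonnet": 0.009,
}

def spend_by_model(records, prices):
    """Sum estimated spend per model from token counts."""
    totals = {}
    for r in records:
        cost = r["tokens"] / 1000 * prices[r["model"]]
        totals[r["model"]] = totals.get(r["model"], 0.0) + cost
    return totals

totals = spend_by_model(usage, price_per_1k_tokens)
print(totals)
```

Roll this up across teams and providers and you have the consolidated billing picture OpenRouter is selling.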

Connecting the Dots
Having your finger on the pulse of the money at the model layer across providers is valuable, but also understanding how companies are using or not using models, and what the adoption rate is across providers, is significantly more valuable. It is interesting how OpenRouter is gamifying it. Driving insights, publishing rankings, and understanding market share is a good way to accumulate power, and the app attribution approach they take is an interesting way to crowdsource this power.
Insights
Rankings
Market Share
App Attribution
This is where the OpenRouter implementation gets the most interesting for me. While they are limiting themselves to just large language model APIs, I've long advocated for and worked towards industry- and economic-level insights, rankings, and market data, and OpenRouter is the closest attempt I've seen an API marketplace make towards these earlier visions.

Compliance
Lastly, I have to highlight some of the compliance and regulatory benefits. These are important. The layer that OpenRouter creates in-between companies and the large language models they use is an important space. There is a lot of potential value created here. There is also a lot of potential for exploitation, which is why it is good to see these areas being invested in by OpenRouter.
Privacy
Logging
Zero Data Retention (ZDR)
GDPR
SOC-2
SLAs
As with other areas of this analysis, these areas will shift when in the cloud, on-premise, or running locally. I also think a balance must be struck in where this layer sits on top of our large language model APIs, other APIs, data connections, and file stores, as well as how this layer reports back to on-premise or cloud logging and other data retention practices. I think OpenRouter's adoption of Zero Data Retention (ZDR) is significant in this area.

A New Class of API in the Toolbox
I see model APIs as simply a new class of API in our diverse API toolbox. I think OpenRouter has innovated on the concept of an API marketplace with the routing and the input and output layers, with an emphasis on this new class of API. I am not a fan of how they demote all other classes of APIs alongside database, file, and other connectors, but I see this regularly from other players. OpenRouter's approach to helping folks manage the money associated with model usage, and connecting the dots with rankings and insights, is compelling.
As the AI bubble continues to shrink, I feel that large language models will give way to many much smaller language models. They will also find their place alongside other existing tools in our diverse API toolbox, such as REST, GraphQL, WebSockets, Webhooks, events, gRPC, machine learning, and other classes of APIs. This is when using OpenRouter is likely to feel a little more constrained, as the priorities across different classes of APIs continue to shift, as they have for decades.
“Be the change that you wish to see in the world.” ― Mahatma Gandhi