Artanis #4: Is this self-sabotage?
We're pivoting to launch a SaaS tool that helps non-technical founders build effective AI without engineers.
Artanis: helping businesses build AI that actually works
🔗 Ways you can help: connections to a new customer profile 🔗
We have a new ideal customer profile (ICP):
1. Non-technical founder building an AI product in a domain where they have expertise
2. Their startup is small, 1-5 people
3. They've raised little or no funding, so hiring engineers is a big risk
Let us know if you can connect us with anyone who may fit the new criteria.
📈 Progress in July - getting to 5 customers 📈
A reminder that our focus is market risk: will startups pay us to build their AI? Our milestone for de-risking this is 10 customers.
Our goal for July was to grow from 4 to 5 customers. We hit the target, and a bright spot was our second customer signing a 6-month extension following their pilot. However, we let our ICP fray by taking on a customer whose first project doesn't involve a heavy AI component.
Primary metric: 5 customers (+1 since June)
Monthly revenue (secondary): £20.5k (-£2.5k since June)
Monthly cost (secondary): 136 hours building AI for clients (+15 since June)
Our "cost" metric is a concern. We're a team of 2, and we learned we're not going to be able to serve 10 customers while remaining purely a service. It's just not scalable enough. We don't want to hire, as our mission isn't to scale a consultancy, so we're going to be changing course in August!
💡 Critical insight - our users are domain experts, not engineers 💡
In all our projects, the main stakeholder has been the domain expert (i.e. the person closest to the data) rather than the CTO. For example: when building an AI tutor for marking English essays, our main stakeholder is the human tutor currently marking those essays. Our value is encoding the knowledge of a domain expert into an AI system that reliably does their job.
We've learned that a domain expert with the right tech is more likely to build a good AI system than an engineer is. The user of Artanis should therefore be the domain expert. It's easier to launch new products when the buyer is also the user, so we're changing our ICP to align with this.
🕵️ Challenges - changing course 🕵️
We're making two main changes in response to issues with our ICP and scalability:
ICP change: our old ICP was the founder of an AI startup that has software engineers but no AI specialists. This has been a tough sell: they often see the quality of their tech as their IP, so want to build it with an internal team. However, non-technical founders often see their domain expertise as their IP, rather than their tech. This is a much more natural fit for us.
Product Launch: we'll struggle to reach 10 customers operating purely as a service, due to lack of scalability, so we're launching a SaaS tool. Artanis will help non-technical founders with domain expertise build AI products without needing to hire engineers. Our SaaS product will perform three main functions for the domain expert:
1. Encode their domain knowledge into an AI product that works reliably.
2. Deploy and host the AI for them, so that other people can use their product.
3. Allow them to update their product in response to requests from their users.
You may question why we're changing tack when the services business has started well. Sometimes we too wonder "Is this just self-sabotage?" Perhaps entrepreneurship often requires a degree of masochism...
🎯 Goal for August - 1 customer for our new SaaS product 🎯
We're changing our goals to focus on the new product. The main goal will be to sign up 1 customer for the SaaS product, rather than for a custom project. If we do that, and avoid churn, we'll grow from 5 to 6 customers. To avoid confusion, we'll report our SaaS metrics separately from our consulting work.
🔬 How to build AI that actually works: new chapter 🔬
We've written a second post in our playbook on building AI, covering the need for performance evaluation. A common mistake is building a model, seeing it works "most of the time", then launching. When it's time to iterate, people don't know if they're making things better or worse and end up feeling like they're playing whack-a-mole.
A better approach starts with defining "success" by building a labelled dataset, before spending any time building a model. You can't reliably improve your model unless you put the time into building this dataset. We acknowledge that defining success can be tricky with language model outputs, so we provide an example of how to do that too.
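To make the "dataset first" idea concrete, here's a minimal sketch (ours, not code from the post) of what evaluation-driven iteration looks like for the essay-marking example above. The essays, grades, and `naive_grader` heuristic are all hypothetical placeholders:

```python
# A minimal sketch of "dataset before model" evaluation.
# Everything here is illustrative: the essays, the grades, and the
# naive_grader heuristic are hypothetical stand-ins.

labelled_dataset = [
    # Built with a domain expert's labels, BEFORE any model work starts.
    {"essay": "The metaphor of the storm frames the whole argument...", "grade": "A"},
    {"essay": "In this essay I will talk about the book.", "grade": "C"},
    {"essay": "Strong thesis, but the evidence in part two is thin.", "grade": "B"},
]

def naive_grader(essay: str) -> str:
    """Hypothetical v1 'model': a crude heuristic standing in for an LLM call."""
    return "A" if len(essay.split()) > 8 else "C"

def evaluate(predict) -> float:
    """Score one model version against the fixed labelled dataset."""
    correct = sum(1 for ex in labelled_dataset if predict(ex["essay"]) == ex["grade"])
    return correct / len(labelled_dataset)

if __name__ == "__main__":
    # Every change to the model now gets the same unambiguous answer:
    # did accuracy on the labelled set go up or down?
    print(f"v1 accuracy: {evaluate(naive_grader):.0%}")
```

Exact-match scoring like this only works when "success" is crisp; for free-form language model outputs you'd swap in a task-specific scorer instead, which is the trickier case the post walks through.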
🙏 Shout-outs 🙏
Big thanks to the following for responding to our previous call for help!
Shingai A - for a fun chat about crazy stuff we've seen in AI startups
Christine F - for the intro to Oli
Matt C - for a few good referrals off the back of a fairly random meeting
Walid BM - for giving us a strong customer reference
Myles G - for the links into Elevate
Ryo & Olly - for helping calm our nerves about self-sabotage!
Thanks,
Sam & Yousef