Early Look: The World's First Inference-Optimized Neocloud
Hi Team,
We’re excited to share an exclusive early look at General Compute: the world's first inference-optimized neocloud. Based in Paraguay, its data centers run on Cerebras chips to attack the AI inference bottleneck, rather than relying on GPUs built for training.
We’re excited about the founder, Finn Puklowski (we previously had him on the Techmates podcast), and have been following his journey building and scaling his previous venture to $40M ARR. We are currently putting together an SPV for our network to back their Seed round, led by Village Global.
We are bullish on this deal because General Compute is building a massive structural moat, specifically driven by:
Hardware Advantage: An exclusive partnership with Cerebras delivering inference up to 20x faster than GPU-based setups.
Energy Arbitrage: Secured access to 500MW of hydroelectric power in Paraguay at an ultra-low 3.3¢ per kWh.
Proven Leadership: A pioneering founder combining hyper-growth tech scaling with deep LatAm experience.
Here is a breakdown of the opportunity:
The Founder & The NZ Connection
Finn Puklowski (CEO) is a top-tier founder with strong ties to our New Zealand network. He is a proven tech entrepreneur who previously founded Fluency Academy, scaling it to over $40M ARR and selling a 20% stake for $50M. Fluency was a massive success that amassed 30M social media followers. Finn brings a unique, highly relevant blend of hyper-growth tech scaling and 10+ years of physical property development experience. He is joined by Jason Goodison (CTO), an ex-Microsoft engineer, YC alum, and engineering YouTuber.
The Opportunity: The Inference Neocloud vs. CoreWeave
The macro opportunity in AI infrastructure is extraordinary, with the GPUaaS market projected to grow toward $50B by 2032. However, existing neoclouds like CoreWeave, Lambda, and Crusoe are heavily over-indexed to general-purpose GPUs. While GPUs are great for training, AI compute demand is rapidly shifting toward inference, which is projected to become the majority of workloads by 2027.
General Compute is stepping in to build an ASIC-powered neocloud purpose-built for latency-sensitive applications. By partnering with Cerebras, they can deliver inference speeds up to 20x faster than traditional GPU setups, establishing a massive performance moat for agentic workflows and voice AI.
The Unfair Advantage: Traction & Execution
The team is moving aggressively to lock in their infrastructure advantages:
Energy Arbitrage: By tapping the Itaipu Dam's hydroelectric surplus, they successfully lobbied for a new data center tariff. This secures energy at roughly 70% below comparable large-scale US rates, with 500MW immediately available.
Hardware Lock-in: They have already secured a compute contract with Cerebras, gaining exclusive API access and serving as a design partner for voice and diffusion models.
US Co-location: They are maintaining a US co-location strategy specifically for latency-critical applications requiring absolute minimal time-to-first-token (TTFT).
Deployment Heavyweights: They have surrounded themselves with veterans to execute the build-out, including Jeff Ferry (who founded and ran Goldman Sachs' digital infrastructure team for 20 years).
Momentum & Next Steps
Village Global wrote the first check and will lead the Seed round. We are currently gathering early interest for the SPV and will communicate terms once they become clear.
Please let us know if you're interested so we can keep you in the loop. Allocations will be on a first-come, first-served basis. Happy to share a deck and more information.