11.19.2024

Apply to Betaworks’ AI Camp: App Layer for $500k in funding

Jordan Crook

TL;DR (but you should read the whole thing): Camp is an in-residence investment & mentorship program for startups building with frontier technologies. For this upcoming cohort — based on recent developments in tech, interfaces, and the market — we’ve refined our previous thesis around AI Applications and are doubling down. We’re seeking teams that harness rapidly improving foundation layer capabilities with a custom architecture and build products that deliver value quickly for a well-defined user profile, with a special interest in applications that touch the workflows and productivity of individuals, SMBs, and antiquated vertical industries. If that describes you, apply here.

Evolution of a Theme

Compared to previous shifts in technology, AI builders have been spoiled as of late. Spoiled with speed. The speed of progress in the last several years has brought us model after model, each better than the last, and new and powerful techniques to harness them. Massively capitalized and talent-rich foundation model companies are knife fighting, pushing each other to keep leaping forward.

This has created the marketplace dynamics that we have been expecting and hoping for, where value is rapidly being pushed up the stack to the application layer (the fun, exciting layer where everyone gets to do new magical things). That’s why we just invested in nine companies (at $500k each) building native AI at the application layer. You can check out those investments here.

And in 2025, we’re doing it again.

In the process of running AI Camp: Native Applications, and seeing so many incredible companies in the space, we watched the increasing capabilities of the AI ecosystem’s foundational technologies drive the creation of radically new products. We believe the technology will continue to evolve, and that the opportunity is large enough to stand up another camp focused on AI at the application layer, with a refined thesis.

Our first camp was built on the belief that the ecosystem writ large — a competitive, fast-advancing foundation layer and an evolving middleware/tooling layer — would pave the way for a flourishing and innovative application layer of products and services.

What we learned over the last six months is that there is unprecedented capacity in the market for new AI-enabled products and services, especially given the advancement in reasoning capabilities (System 2 thinking) of the models, increasing context windows, and growing demand for AI (and agentic) software. We’ve gotten an even better sense for what has the greatest chance of venture-scale success and want to double down on our thesis.

Our Observations, and Opinions

Technology Changes

When the last camp (fall 2024) kicked off, OpenAI had just released the o1 model, aka Strawberry. It was close to what we expected, yet it set an interesting and high bar that merited some recalibration of prior thinking. Based on historical precedent, it’s not unrealistic to believe that the other major players are cooking up and shipping similar tech, and that the open side of the AI ecosystem will get close to parity in a short amount of time. So, what we are scrambling for API access to and experimenting with now will become ubiquitous in the near future.

So why do we care (beyond jealousy) that it’s now better at math than us?

This is one of the best examples of System 2 thinking that we’ve seen from an LLM system to date.

Without these newer System 2-like structures built around it, a vanilla model is just a pile of weights: purely reactive, without the capacity to be reflective, and much less usable or valuable in work that requires multi-step instructions (such as moderately complex arithmetic), conditional logic and iterative loops, sustained reasoning, deductive reasoning, or contextual continuity.

To anthropomorphize a bit, we’re seeing the ability to be reflective, the way a human would be when deciding whether to make an investment or hire a candidate: pulling multiple pieces of context together, in the right order, and analyzing each piece individually and as an interconnected whole. This has increased our conviction around the application layer.

Rapidly improving techniques like chain-of-thought (CoT) prompting and self-consistency sampling are a big part of the reason for this. Together, they amount to something of a decisive shift in the way we increase the aggregate intelligence capabilities of AI systems. The industry continues to get better and better at RL-ing models, making architecture improvements, and of course throwing more tokens and parameters at the problem. But what has been shown perhaps definitively in recent months is that throwing test-time compute at the problem (including the growing ability to utilize raw context at increasingly mind-boggling length) creates another axis of techniques (and cost, and investment) to push forward on, and results in a force multiplier for everything that we care about.
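
To make that concrete, here is a minimal sketch of CoT prompting with self-consistency sampling, assuming a generic text-completion API: ask the model to reason step by step, sample several independent reasoning chains at a non-zero temperature, and take a majority vote over the final answers. The call_model function is a hypothetical placeholder for whatever LLM API you use, not a reference to any particular provider.

    from collections import Counter

    def call_model(prompt: str, temperature: float) -> str:
        """Placeholder for a real LLM API call (hypothetical)."""
        raise NotImplementedError

    def extract_answer(completion: str) -> str:
        """Naive extraction: take the last numeric token in the completion."""
        tokens = [t.strip("$.,") for t in completion.split()]
        numbers = [t for t in tokens if t.replace(".", "", 1).isdigit()]
        return numbers[-1] if numbers else ""

    def self_consistent_answer(question: str, n_samples: int = 10) -> str:
        # Chain-of-thought prompt: nudge the model to reason step by step.
        prompt = f"Q: {question}\nA: Let's think step by step."
        # Self-consistency: sample several reasoning chains, then majority-vote.
        answers = [extract_answer(call_model(prompt, temperature=0.8))
                   for _ in range(n_samples)]
        counted = Counter(a for a in answers if a)
        return counted.most_common(1)[0][0] if counted else ""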

The foundation of our camp program is always an emergent technology that unlocks potential for new venture-scale companies. Reasoning — System 2 thinking — is a massive unlock.

Interface/Positioning Changes

One of our core learnings during the last camp was that AI can power software that overflows the boundaries of a tool. When something is dynamically responding to a user with personalized outputs, evolving its presentation, and adapting its responses over the course of many interactions, it is no longer a tool, but a role. And as humans, we can naturally begin to think of that software as an entity with agency, whether it’s a virtual companion who is asking to be treated as such (Ursula), a design tool that actually writes the corresponding code and ships a pull request (Dessn), a note-taking AI app that feels like a college-educated assistant taking notes based on your agenda/goals (Granola), or a camera that knows how to compose photos and direct subjects (Alice Camera). We’re interested in the consequences of this shift both for how products are designed and for how they are distributed and sold.

Value creation at the application layer, however, relies on innovation at the interface level. Specific HCI design and choices at the user experience level drive much of what is distinctive about the best products in this area. This was a core component of the last camp. We are already seeing some dramatic shifts in the core modalities (four of the nine companies incorporated voice/audio as a main input, and not one relied primarily on text-based chat), and we expect the sensory aperture of these products to widen.

We believe that voice will continue to have its moment — driven by the rise of NotebookLM, OpenAI’s Advanced Voice Mode, and reports that Eleven Labs has tripled its revenue in the last year. Sometimes it takes longer than you’d think for products to catch up with the full capabilities of technology like this, and we’d guess that we have yet to see the products that best take advantage of multimodal and end-to-end audio systems. (We’re personally excited about companies from our last camp, Beebot and Ursula.)

Not all companies in the next camp will be voice-powered. However, we’re excited by the concept of more human-like interfaces combined with agentic software that displays cognition, autonomy, persistence, and memory. This allows for some virtual embodiment of this intelligence, perhaps opening up a broader category of artificial life.

Market Changes

More recently, as we see indications of an asymptote in closed-model capability gains, the open side of the ecosystem has gotten closer to parity with the closed models. Competition is white hot and multidimensional, not just among the closed-source models but across the whole of the ecosystem, and it creates a peace dividend for incredible technologists at the application layer.

While we were early to this thesis, we’re excited to see that we aren’t alone.

“The most interesting layer for venture capital. ~20 application layer companies with $1Bn+ in revenue were created during the cloud transition, another ~20 were created during the mobile transition, and we suspect the same will be true here,” wrote Sequoia’s Sonya Huang and Pat Grady in October.

The market has also produced a Cambrian explosion of tooling and platforms built on top of foundation layer tech. There is a bewildering array of model routers, vector stores, and tooling and libraries at every layer of abstraction; infrastructure if you want to roll your own; and three or four different varieties of serverless. There are a dozen ways you can run evals, and companies that will help you fine-tune models and run adapters. It can be confusing at times, but there’s little you can think of that isn’t in the supermarket. When a market borders on oversupply, it’s a great time to be on the demand side and build something on top of it.

Who Should Apply?

AI Camp: App Layer will be foundation-model agnostic, but we will seek teams that are actively selecting and adapting foundation layer capabilities within a custom architecture that beats the performance of naive prompting.
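
As a loose illustration of what we mean by a custom architecture versus naive prompting, here is a hedged sketch under generic assumptions: retrieve domain context, decompose the task, check each step, then assemble the result. Every function referenced (call_model, retrieve_context, passes_checks) is a hypothetical placeholder, not a description of any applicant's or portfolio company's system.

    def naive_answer(task: str, call_model) -> str:
        # Naive prompting: send the raw task to the model in one shot.
        return call_model(task)

    def pipelined_answer(task: str, call_model, retrieve_context, passes_checks) -> str:
        # Custom architecture: ground the model in domain context, break the
        # task into steps, and validate each step before assembling the result.
        context = retrieve_context(task)
        plan = call_model(f"Context:\n{context}\n\nBreak this task into numbered steps:\n{task}")
        results = []
        for step in plan.splitlines():
            if not step.strip():
                continue
            draft = call_model(f"Context:\n{context}\n\nComplete this step and show your work:\n{step}")
            if not passes_checks(step, draft):
                draft = call_model(f"Your previous attempt failed validation. Redo this step:\n{step}")
            results.append(draft)
        return call_model("Combine these step results into one final answer:\n" + "\n".join(results))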

Given that the latest models can incorporate large amounts of context into their reasoning, work through complex multi-step processes, and even manipulate existing software and websites (and robots?), we look at the spaces with the most context, the most complex processes (with the most at stake), and the most desire to turn a tool into a role.

If a single human accounts for X amount of data/context, and Y number of multi-step processes, then what does that say about a business? Dozens, hundreds and sometimes thousands of individuals, troves upon troves of data/context, scores of multi-step, interconnected processes all laddering up to one or two north-star goals that themselves evolve over time.

That’s why we’re incredibly excited to continue executing our thesis around AI at the application layer with a focus on the future of work and productivity for individuals, SMBs, and antiquated vertical industries.

We’ve spent a bunch of time harping on what we think are the best things about the tech. But in a lot of ways the things we are most focused on now are “back to basics” in terms of what we think will win.

Attributes we are looking for:

Products that focus on evident value

  • What the thing does, and why it is new, should be obvious, specific, and provable within the Camp program
  • Founders should be deeply familiar with the problems faced by their customers

Products that deliver value quickly and are aligned with well-defined users

  • Customers: SMBs, users and prosumers (bottoms-up/PLG); even some interesting enterprise stuff is cool

Careful, deliberate thinking about which interfaces implement the reasoning capabilities in ways that serve users best

  • Have you figured out a new interface or form factor that makes the connection between users and machines better, faster, more fun?
  • We’re really interested in intelligent systems that utilize real-time adaptive I/O: end-to-end voice systems, fast copilots, proactive/persistent interfaces

Positioning that allows for focused data collection and application

  • Even better if you have a theory about how you are going to build a flywheel / moat out of it

Novel applications of AI that reflect a deep understanding of verticals and the pain points of users currently unaddressed by SaaS

  • Founders with the right combination of technical expertise and domain expertise/lived experience

Companies that leverage agentic systems coupled with new business models to explore new unlocks with AI

  • Can AI be sold as a managed service, as employees, as personas, as the output of work it produces?

If you are building something in this space, we’d love to hear from you. Apply here.

How Camp Works

Camp will run from late February through May 2025.

Camp consists of 13 weeks in-residence at Betaworks where early stage companies receive support with product development, platform strategy, data science, storytelling, and fundraising from the Betaworks team, dedicated mentors, and our network of industry leaders. Participants get access to the Betaworks shared workspace, located in NYC’s vibrant Meatpacking district, for the duration of the program.

In-person participation is required for the majority of the program — the first and final two weeks are mandatory. Programming includes optional sessions with guests (1–2 per week) and a weekly required all-hands standup. Sessions include speed dating with investors, visits from industry leaders, workshops, founder stories, and live demos. Camp culminates in Demo Day, where each team presents to a room full of investors. See past Demo Days here.

Each participating company will receive a guaranteed investment of up to $500k. Betaworks Ventures will invest up to $250k on an uncapped SAFE note with a 25% discount, and receive a 5% common stock stake in the business. Our syndicate partners will be adding up to $250k total on uncapped SAFEs with the same 25% discount. To summarize, participating companies will receive an uncapped SAFE note with a 25% discount from Betaworks + our syndicate partners, and Betaworks will receive 5% of the company’s common stock. More details here.
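
For illustration only, with hypothetical numbers (the exact mechanics depend on the SAFE documents): a 25% discount means the SAFE converts at 75% of the next priced round's per-share price. If that round prices shares at $1.00, the $500k in SAFEs would convert at $0.75 per share, or roughly 666,667 shares versus the 500,000 a new investor would receive for the same amount at the round price.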

Betaworks has been investing at the forefront of ML/AI since the launch of its first fund in 2016, with portfolio companies that include HuggingFace, Nomic, Flower, Granola, and more. To learn more about Betaworks, we recommend visiting our community space in NYC during one of our regular public events. You can sign up here to stay in the know: beta.works/bytes
