10.30.2024

Meet the Nine Native AI Startups Launching at Betaworks Camp Demo Day

Jordan Crook

We’re excited to reveal the full cohort of teams from our latest AI Camp: Native Applications.

These nine teams are building at the application layer, unlocking new, native user experiences only made possible with AI. Read more about how we picked this Camp theme here.

We tend to be drawn to founders who bring a deep understanding of a problem based on direct, personal experience and who can exercise their personal taste in coming up with the solution. The founders in this cohort have all lived the problem they are tackling.

We asked each team the same set of questions about the problem they are solving, why they are the team to solve it, their target customers, and their business model. Read on to learn more about each one!

Alice Camera

Alice Camera is an AI-native camera built to make everyone a professional creator.

Founding team: Vishal Kumar, Vikas Kumar

The Betaworks POV

A big part of our thinking around this camp was that there would be some blend of copilots and agentic AI living at the application layer, but Alice Camera represented the first concrete example of active intelligence transforming what once was a tool to be wielded by humans into an active participant alongside humans. We were thrilled by Vishal’s vision to allow a camera to actually become a photographer.

What is the problem you are trying to solve?

High-quality photo and video content creation is more necessary than ever before, for both creators and brands. The problem is that professional cameras are expensive, outdated, and complex. And capturing footage is only the first part of the battle; the real challenge comes after. Content creation has many fragmented steps — from transferring files, to colour grading, to video editing and visual effects — that can be time-consuming and expensive.

How are you solving the problem?

Alice Camera is an AI-native Micro Four Thirds mirrorless camera that attaches to the user’s phone. It allows everyone to capture professional-quality content through an easy-to-use interface on your smartphone’s screen. And it’s built for the AI era — it’s the only professional camera on the market with a powerful Qualcomm Snapdragon chip and a TPU from Google.

These chips allow us to run AI algorithms on-device to automate complex camera functionality. We’re also working on active intelligence for Alice Camera, controlled entirely by your voice. It handles tedious post-production tasks directly within the camera itself, helping streamline workflows like never before. Our localised camera assistant uses an LLM to run essential content creation workflows; we envision Alice Camera becoming an intelligent and active participant in the content creation process.

We’re not just building a camera that attaches to your phone — we’re bundling essential content creation hardware and software into one seamless package. Alice Camera will be a content studio in your hands.

Why are you the right team to solve it?

We’re a team that understands this market. Vishal, our CEO, worked as a data scientist at Sotheby’s while building a side-hustle as a creator with over 30,000 followers. Our CTO Liam is a second-time founder with a PhD who has built consumer electronics for creatives and studied at Oxford University. Vik, our COO, previously worked at JP Morgan. Ollie built a camera at the age of 17 and also studied at Oxford University. And Maiya is a creator with over 100,000 followers on her personal accounts.

Who are your target customers?

Our primary target customers are the 200 million creators and businesses (owners and marketing executives) who want to create high-quality content for their personal brands or on behalf of others.

What is your business model?

The Alice Camera hardware will cost our users $900, and for access to our advanced editing software we plan to charge a monthly subscription fee when we launch it in 2025.

ESAI

ESAI empowers Gen Z to craft their personal narrative for college apps and beyond.

Founding team: Julia Dixon

The Betaworks POV

In a world where hard skills will be democratized via AI and optimized workflows, the real mark of ‘talent’ will be in a person’s soft skills. What is the ‘X’ factor they bring to the job around their creativity, their judgment and decision making, and their taste? Julia comes from a career in college advising and counseling and understands not only how to democratize access to these services via AI, but how to prepare Gen Z for the evolving future ahead of them and help them tell their story and build their personal brand.

What is the problem you are trying to solve?

Over half of today’s job skills will be automated within a decade. As soft skills become the core differentiator, Gen Z’s ability to build and communicate their personal brand becomes crucial for future success. Yet nearly 70% of Gen Z students struggle with the first major test of this skill: crafting a compelling personal narrative for college applications. With public school counselors serving an average of 400 students each and private college advisors charging upwards of $300 per hour, a major inequality has emerged at precisely the moment when students first need to articulate their unique value proposition. Without accessible tools to help students uncover and showcase their authentic story, this unequal playing field threatens to follow Gen Z from college admissions into their careers.

How are you solving the problem?

ESAI democratizes personal narrative building with ethical, AI-powered tools that help students spark, sculpt, and showcase their authentic story. Starting with college applications, the platform automates the expensive advising experience by helping students uncover unlikely connections across their experiences, transform raw materials into compelling narratives, and adapt their story for different opportunities. This builds the storytelling muscles students will need throughout their careers while making professional-grade guidance accessible to everyone at a fraction of the traditional cost.

Why are you the right team to solve it?

As a former college advisor, Julia saw a major inequality emerging, as only the wealthiest families could afford the resources their students needed to stand out and get into the most competitive colleges and universities. She created ESAI to help level the playing field, so students of all backgrounds and resources could have a fair shot at building a story for their future. Leveraging Julia’s background in marketing and Gen Z community-building, ESAI went viral, reaching over 20 million students and helping over 250,000 with their admissions journeys over the last year.

Who are your target customers?

Gen Z, starting with undergraduate applicants in the US and international high school students applying to US colleges and universities.

What is your business model?

ESAI is a B2C, freemium subscription model. We work directly with students and families so we can grow with our users throughout their early career journey. Using a social feedback loop, ESAI creates customized, shareable assets for each user with the goal of being the place students build and share their story over time.

Additionally, ESAI has key distribution partners like the national nonprofit American Student Assistance.

Autoplay

Autoplay builds AI that understands user intent and UI to power agents that help users navigate and master software in real time.

Founding team: Sam Nesbitt, Marie Gouilliard

The Betaworks POV

It’s not often that you come across a team that spikes in opposite directions with such a clearly unified vision for the company, but we couldn’t resist that lethal combination when we met the Autoplay team. Marie incorporates a handful of bleeding-edge technical approaches — blended data inputs, inverted RLHF, and the latest research around agents from the gaming world — while Sam continues to get the highest cold DM open rates on LinkedIn that I’ve ever seen. We were completely unsurprised when their pre-seed round came together in a matter of a couple of weeks.

What is the problem you are trying to solve?

Autoplay is solving the problem of product adoption and user engagement for software. The core issue is that users often don’t know what they don’t know, meaning they struggle to fully utilize the software because they are unaware of key features or how best to use them. This leads to poor user engagement, and ultimately churn.

How are you solving the problem?

The product leverages self-driving technology to learn the software and integrates with session replay databases to understand user intent. The AI predicts user goals at both the individual and enterprise levels, guiding users through the software in real time, showing only relevant information based on their needs and offering insights into how others in the organization use the same tools.

Who are your target customers?

B2B SaaS companies

What is your business model?

Usage model — based on the amount of input data (session replays) needed to train the AI for each piece of software.

Ursula

Ursula is an artificial life lab.

Founding team: Pedro Lucca Denardi Passarelli, Jonathan Celio

The Betaworks POV

We all grew up with stories and became attached to the characters in those stories, whether it was comic strips or Disney movies or video games. But once we reach a certain age, a part of our brain knows that those characters are fiction, no matter how emotionally attached we are to them. When we met Pedro and were introduced to Ursula, it was the first time we were not only tricked into believing that this character was ‘alive’, but we were convinced by the genuine belief of the founder that he could create artificial life.

What is the problem you are trying to solve?

Creating lifelike, artificially alive characters has been treated purely as a computer science problem. We believe that character development in any medium is fundamentally an artistic process, and that you need to combine art and technology to create life.

How are you solving the problem?

Our proprietary cognitive architecture is a mix of agentic LLM behavior and The Sims-inspired symbolic AI. These creatures have emotions, experience needs, form memories, and exhibit unique behavioral patterns, while having the ability to interact with the world around them through their animated bodies.
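
To make that hybrid a little more concrete, here is a minimal, purely hypothetical sketch of how a Sims-style symbolic layer might feed an agentic LLM layer. None of this is Ursula's actual code; every class, need, and threshold below is invented for illustration.

```python
# Purely hypothetical sketch (not Ursula's actual code): illustrates the general
# pattern of a Sims-style symbolic needs layer feeding an agentic LLM layer.
import random

NEEDS = ["hunger", "energy", "social"]

class Creature:
    def __init__(self, name: str):
        self.name = name
        self.needs = {n: 1.0 for n in NEEDS}  # 1.0 = fully satisfied
        self.memories = []                     # rolling log of recent events

    def tick(self) -> None:
        """Symbolic layer: needs decay over time and drive the creature's state."""
        for n in self.needs:
            self.needs[n] = max(0.0, self.needs[n] - random.uniform(0.01, 0.05))

    def mood(self) -> str:
        avg = sum(self.needs.values()) / len(self.needs)
        return "content" if avg > 0.6 else "restless" if avg > 0.3 else "distressed"

    def build_prompt(self, user_message: str) -> str:
        """Agentic layer: current needs, mood, and memories become LLM context."""
        return (
            f"You are {self.name}, a small creature. Mood: {self.mood()}. "
            f"Needs: {self.needs}. Recent memories: {self.memories[-3:]}. "
            f"The user says: {user_message!r}. Reply in character, and if a "
            f"need is low, act on it (e.g. ask for food when hungry)."
        )

creature = Creature("Ursula")
creature.tick()
print(creature.build_prompt("Good morning!"))  # this prompt would go to an LLM
```

The appeal of a split like this is that the symbolic layer stays legible and tunable (a designer can decide exactly how fast hunger decays), while the LLM handles the open-ended, in-character behavior.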

Why are you the right team to solve it?

We are an Emmy Award-winning team of creative technologists who have worked at some of the world’s best gaming companies, generative media startups, and animation studios.

Who are your target customers?

Our first creature is Ursula, a companion for kids, but we intend to launch different characters for different audiences over the course of the company’s life.

What is your business model?

Monthly subscription for some of the apps, but we fully intend to monetize our IP down the line (merch, licensing of characters and tech).

Dessn

Dessn enables product designers to contribute to product building, without coding.

Founding team: Gabriella Hachem, Nim Cheema

The Betaworks POV

It’s absolutely wild that more than a decade after the design revolution, with companies like Figma seemingly taking over the world, there are still massive pain points around the hand-off between designers and developers. The Dessn team excited us because they are capitalizing on the latest in AI to deliver a value proposition to designers and developers that is highly complex on the inside and dead simple on the outside. Often when you integrate AI into a system, you allow for probabilistic results rather than the rigidity of deterministic results, and yet the Dessn team has found a way to give ultra-fine controls to designers — which is obviously a requirement given designers are some of the most intense control freaks on the planet — while still leveraging the speed and power of AI to make their lives easier.

What is the problem you are trying to solve?

Only developers can contribute to product building right now, creating a huge bottleneck/dependency. This results in 1) a slow and painful back-and-forth feedback loop, 2) tradeoffs on product quality due to “limited resources”, and 3) different sources of truth between devs & designers/PMs.

Current design tools live very far away from production and have none of the limitations/constraints that code actually has. The handoff process that exists between the product functions results in tons of info being lost in translation, and therefore a lower quality product.

How are you solving the problem?

Our product is a Chrome extension that overlays on top of your live app (staging or prod), and enables you to make UI changes. Our AI takes on the “burden” of writing the code, and pushing straight to the codebase. Devs just have to approve the code to get it live.

We’re building the translation layer between designer and code, and basically turning any designer into a design engineer.

Why are you the right team to solve it?

Gabriella and Nim have been working on product teams together for the last 7 years. As Head of Product, Gab was constantly frustrated at not being able to directly execute on the decisions she was making. Things were constantly getting lost in translation during the handoff process, resulting in lower-quality product building through a slow and painful process. Nim has always lived at the intersection of design, product, and engineering and has always wanted to enable more people to live there as well. As Head of AI at Planned, he dove deeply into the world of LLMs and realized that this was the technology that could finally unlock what both of them (and the market) have been waiting for.

Who are your target customers?

Target customers are SaaS companies around Series A-B.

What is your business model?

$99/month for automated component mapping, unlimited users and changes.

Sarama

Sarama has built the first system that uses dog vocalizations to give the deepest insights into their health, behavior, and preferences.

Founding team: Praful Mathur, Shathabi Ravindra

The Betaworks POV

AI is making parts of the world legible that have never been legible before. When we launched this Camp session, we knew that to be true, and we became fascinated with non-language-based transformer models… physics engines, weather models, etc. But we never would have expected that our dogs’ barks were going to be part of that equation. The team at Sarama has the right blend of technical and AI expertise, GTM, and passion for this space to deliver a collar that can eventually translate dog vocalizations into human language, and we can’t wait to figure out what our dogs are actually trying to tell us.

What is the problem you are trying to solve?

We’re using our ML to co-develop a language between dogs & people to establish interspecies conversation.

How are you solving the problem?

Our smart collar uses continuous passive audio monitoring to encourage interaction between dogs and their owners, enhancing model training and vocalization translation. Paired with our app, it delivers personalized insights into health, behavioral triggers, and routine anomalies, giving dog parents a deeper understanding of how their dog is doing, both emotionally and physically.

Feature list:

  • Sleek, lightweight, minimal collar with a microphone, temperature and humidity sensors, heart rate monitoring, GPS, an LED for ambient emotion display, an ML chip, and Bluetooth/WiFi
  • Privacy focused — filters out human speech, keeping only dog barks and important environmental triggers. Plus, some of the ML runs on the device, so the data stays private and secure
  • The collar doesn’t have a D-ring, so it’ll be purely for monitoring

Why are you the right team to solve it?

Both Peter & Praful have focused on animal communication for years in research capacity. Peter’s work directly contributed to the funding of Earth Species Project & establishment of Project CETI. Additionally, the founding team has been deeply involved in founding multiple startups & scaling established multi-billion dollar companies. Our singular goal for 2025 is to maximize adoption of our product to bring in data to improve our models.

Bios:

  • Praful is a 4x founder & early-stage investor who’s worked across logistics and deep tech. He was involved in passing a Boston mandate around taxi regulation and Senate Bill S-734, and organized the first Uplift Series of talks in SF last year.
  • Abi is a full-stack, technical growth marketer and startup advisor with over a decade of experience in accelerating revenue growth at consumer tech and SaaS companies
  • Peter is one of the leading experts in machine learning with a focus on animal behavior and bioacoustics processing.

Who are your target customers?

  • Dog parents who dote on their dogs, especially those with additional needs (e.g. disabled dogs, dogs with mobility issues, dogs with separation anxiety, and adult dogs starting to show early signs of aging or with ongoing health issues).
  • The people we love to interact with are the ones who buy clothes, premium food, and toys & treats that leave their friends questioning their sanity, and who are obsessed with providing the best care for their dogs.

What is your business model?

Subscription-based model. We charge a $35 subscription with a 12-month commitment upfront.

Tato

Tato simplifies complex IT projects with auto-documentation & AI-powered insights.

Founding team: Justin Delisle, Vlad Lokshin, Mathieu Chretien

The Betaworks POV

One of the things that these LLMs are very good at is making sense out of high volumes of unstructured data. When we got to know the Tato team, and learned that Justin is a Microsoft distinguished wizard ninja of enterprise architecture (™), we were able to go deeper than we ever have around a seemingly unsexy space: enterprise IT and ERP consulting projects. But the more we investigated, the more we realized that the sheer volume of unstructured data generated by this industry is the perfect fit for a highly customized, AI-powered tool. And on top of that, the Tato team, with vast experience living the problem, was the perfect team to deliver that tool.

What is the problem you are trying to solve?

Complex IT projects fail because people can’t keep up with everything and everyone on these hundred-person transformations.

How are you solving the problem?

Tato is added to project interactions like meetings, emails, documents, and project management apps. It auto-documents the project and gets the right insights to the right people at the right time.

Why are you the right team to solve it?

  • Justin Delisle, CEO, is a software engineer with a decade of experience implementing ERP projects and acting as practice leader at a consulting firm.
  • Vlad Lokshin, head of product, has built multiple products from 0 to 1 with many millions in revenue.
  • Mathieu Chretien, head of GTM, took US market operations from $0 to $200M at his previous startup.

Who are your target customers?

Consulting firms who work on complex IT projects

What is your business model?

B2B SaaS Subscription.

Hopscotch Labs

Hopscotch Labs is building a City Guide for AirPods. It’s called BeeBot.

Founding team: Dennis Crowley, Max Sklar, AJ Fragoso

The Betaworks POV

Dennis Crowley has been working for 20 years on software that gets people to look up from their devices and experience the world and people around them IRL — Software for the Streets. At each new phase change of technology, from SMS to smartphones, he has capitalized on the latest unlocks to deliver high-utility, delightful features for end users. With BeeBot, he’s doing the same by leveraging the proliferation of AirPods and the rise of AI, and we’re excited to be along for the ride once again.

What is the problem you are trying to solve?

We are continuing the “software that makes cities easier to use” mission from both Dodgeball and Foursquare. We want to make people more aware of their surroundings and more connected to their neighborhoods/cities.

How are you solving the problem?

BeeBot is an example of “an app you don’t have to use”. Once you install the app on your iPhone, BeeBot “turns on” whenever you put AirPods / headphones on. Once it’s on, it’ll occasionally augment your walk with info about what’s nearby — places, people, events, etc. It’s not a walking tour (as it’s not telling you where to go), but more of a “walking assistant”. It’s designed to be proactive and lightweight. You may only hear a few messages per day, and the messages are designed to be short and non-obtrusive (think: two sentences).

Who are your target customers?

Anyone who walks around with AirPods. :)

We’re building for people who live in cities, biased towards locals instead of tourists.

What is your business model?

TBD for now, but most likely subscription ($x/month), plus relationships with local merchants and content providers.

Unternet

Unternet is building Operator, a new, intelligent client for the web.

Founding team: Rupert Manfredi

The Betaworks POV

A core focus of this camp is innovation around interface, and how to bring agentic, AI-powered touchpoints to average consumers. If the main surface area for most users is the internet, we had a strong suspicion that it would be a browser, but we hadn’t seen a distinct vision attuned to the tenets of the web until we met Rupert. He had been tinkering, sketching, and strategizing his vision for a year while working full-time at Adept and was ready to take the plunge. He’s fully convinced us that the personal computing revolution hasn’t yet begun.

What is the problem you are trying to solve?

All our software today basically exists in “dumb rectangles”, whether that’s windows in your OS or tabs in a browser. These windows are oblivious to what’s going on inside them and what actions apps can take, or who you are and what you’re trying to accomplish.

Why is this a problem? Because any task on our computers involves numerous steps and requires lots of context, but our software environment doesn’t track any of it. We end up manually browsing web pages, searching for menu items, re-entering information, logging on to multiple services (and scattering our personal information across the web in the process). Your computer can’t see the big picture, so you’re stuck managing all the little details.

(A simple example: comparison shopping means manually juggling tabs for different sites, tracking prices and reviews, while cross-referencing your budget spreadsheet — all tasks your computer could help with, but doesn’t.)

AI has the potential to solve this. But while we have the raw models, we’re missing the software building blocks to bring this vision to life. And we need open standards anyone can build on, just like the web — so this becomes ubiquitous and accessible to all, beyond any single company’s reach.

How are you solving the problem?

Like all good stories, our secret plan comes in three parts:

  1. Building a new form of web application — “web applets” — that can be understood & used by AI, while preserving your privacy and data ownership. An extension of the web, and an open standard anyone can build upon and contribute to. (See the sketch after this list for a rough idea.)
  2. Building an intelligent client for the web that translates user intent into actionable steps within these applets.
  3. Establishing an ecosystem of services to make it easy to build and distribute this software.
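
To give a flavor of what an applet that can be "understood & used by AI" might involve, here is a small, purely speculative sketch. The actual web applets standard isn't spelled out in this post, so every field and function name below is a hypothetical stand-in.

```python
# Purely speculative illustration: the real web-applet spec is not defined here,
# so all names and fields are invented placeholders.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AppletAction:
    name: str                   # machine-callable action, e.g. "search_products"
    description: str            # natural-language hint an AI client can read
    params: Dict[str, str] = field(default_factory=dict)  # param name -> description

@dataclass
class AppletManifest:
    name: str
    description: str
    actions: List[AppletAction] = field(default_factory=list)

# What a shopping site might declare so an intelligent client can act on it
shop = AppletManifest(
    name="Example Shop",
    description="Search and compare products",
    actions=[
        AppletAction(
            name="search_products",
            description="Search the catalogue and return items with prices",
            params={"query": "free-text search terms", "max_price": "optional ceiling"},
        )
    ],
)

def plan(user_intent: str, manifest: AppletManifest) -> AppletAction:
    """Stand-in for the intelligent client: in practice an LLM would read the
    manifest, pick the action matching the user's intent, and fill in its
    parameters before invoking it."""
    # Trivial placeholder policy: always pick the first declared action.
    return manifest.actions[0]

print(plan("find a kettle under $40", shop).name)  # -> search_products
```

The design idea this illustrates: the site declares its capabilities in a machine-readable form, and the intelligent client maps the user's intent onto those declared actions rather than scraping pages.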

Why are you the right team to solve it?

Rupert Manfredi has been working on the fundamental problem of how humans will interact with AI systems for over 6 years (long before it was cool). His diverse experience spans collaborating with ML research teams at Google, innovating on generative UI & browser technologies at Mozilla, and developing workflow augmentation tools using large, multimodal action models at Adept alongside researchers from OpenAI, DeepMind, and Google Brain.

Who are your target customers?

Now: early adopters, and developers interested in building with web applets.

In the future: everyone who uses a web browser to do things.

What is your business model?

There will be a set of services that we can provide, which — while not mandatory — will be a great default for most users and developers building on this platform. In particular:

  • Hosted, privacy-preserving model & sync
  • High-quality information API, for getting better answers than regular web search
  • Third party payments, both for developers and web publishers (who need to be paid for their work!)
