The Next Phase of Agentic AI: Social Intelligence
About This Episode
In this episode of the AI Agents Podcast, host Demetri Panici sits down with David Petrou, founder and CEO of Continua AI, to discuss the evolution of AI agents, multimodal models, and social AI.
They explore how agentic systems are reshaping collaboration, productivity, and human connection, sharing insights from real-world experience building AI products and lessons learned from past innovations.
David shares his journey from Google, where he spent nearly two decades working on AI and machine learning, to founding Continua AI, a company focused on creating AI agents that actively participate in group conversations, nudging people to connect and collaborate more effectively.
They discuss the nuances of AI development, from early experiences with machine learning and language models to the latest breakthroughs in reasoning, tool usage, and zero-shot learning.
David and Demetri also cover technical insights, including AI hallucinations, token efficiency, latency, cost optimization, and on-device processing, providing listeners with a clear understanding of the opportunities and challenges in building intelligent systems today.
This episode is a must-watch for anyone interested in the future of AI, productivity, and social intelligence—whether you're a developer, entrepreneur, or AI enthusiast—and how cutting-edge AI can elevate both personal and organizational performance.
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⏰ TIMESTAMPS:
00:00 – Proactive AI agents and Slack integrations
03:18 – Introduction to the AI Agents Podcast
05:15 – David Petrou’s background and journey into AI
09:37 – Machine learning, humility, and agentic intelligence
14:06 – Why early AI ideas failed and lessons from Google Glass
18:36 – Multimodal AI and real-world use cases
23:03 – Hallucinations, tokens, and how LLMs actually work
27:03 – Cost, latency, and the rise of on-device AI
31:20 – Designing social AI for group conversations
36:48 – Product vision: AI that enhances human connection
41:55 – The future of agentic AI and collaboration
47:32 – Final thoughts on innovation and timing
52:11 – Episode wrap-up and closing remarks
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Sign up for free ➡️ https://www.jotform.com/
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Follow us on:
Twitter ➡️ https://x.com/aiagentspodcast
Instagram ➡️ https://www.instagram.com/aiagentspodcast
TikTok ➡️ https://www.tiktok.com/@aiagentspodcast
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Transcript
even more agentic connections. So we have a Slack integration. It's internal, not yet launched. It's working great. It has access to our code, our logs on GCP. When we're debugging something, it helpfully chimes in. We had this incredible moment, I guess maybe two weeks ago, where just out of nowhere, it was not even prompted in any way, it just preemptively or proactively said, "Hey guys, I just noticed that Gemini 3.0 dropped. Can you upgrade parts of me to use different parts of that?" So, [laughter] you know, at the moment it has view access to our code. You could imagine a future where it has edit access and it can sort of perform brain surgery on itself. I see this type of Slack integration as something that could be really useful for different organizations. Hi, my name is Demetri
Panici and I'm a content creator, agency owner, and AI enthusiast. You're listening to the AI Agents Podcast, brought to you by Jotform and featuring our very own CEO and founder, Aytekin Tank. This is the show where artificial intelligence meets innovation, productivity, and the tools shaping the future of work. Enjoy the show. Hello and welcome back to another episode of the AI Agents Podcast. In this episode, we have David Petrou, the founder and CEO of Continua. How are you doing today, David? >> I'm doing great, Demetri. How are you? >> I'm living the dream. Tell me first just a little bit about your background and how you kind of got into AI itself. >> Oh boy. Probably as a kid, reading a lot of books, a lot of books on science and computers. Devoured manuals and so forth. Taught myself programming, typing in programs from the backs of computer magazines like Family Computing and Compute. Reading Gödel, Escher, Bach by Douglas Hofstadter, that was something that really opened my mind to all the strange loops and all the interesting phenomena that come out of emerging technologies, artificial life, things of that nature. I went to school for computers, both at Berkeley and Carnegie Mellon, and yeah, I got my PhD at CMU. Then right after that I went to Google. I was at Google for, I guess, almost 18 years. And yeah, that was pretty eye-opening. I felt like I had a front-row seat to the future of technology. Got to work on a lot of interesting projects. I was in machine intelligence, AI, ML, that type of thing, for my last 10 years or so at Google. So yeah,
it was a really interesting time, and it still is really interesting. That's why I started Continua, that's why I left Google to start Continua: the rate of change of technology right now is faster than I've ever seen it in my whole career, and just being a part of a dynamic environment at a startup seemed exactly the right thing to do in 2023 when I started Continua. >> Awesome. That's amazing. Well, after taking a look at your background, I've got some questions that hopefully will be tailored to you and interesting. I kind of want to start with, I guess, your foray into how you got to this concept for Continua, right? I know the tagline is "AI that joins the conversation." You're kind of trying to make it a more personal and close, I guess, what's the word I'm looking for, experience compared to what people would usually interact with, right? I see on the homepage text conversation as the primary thing, and a lot of people are in their texts all the time, so it seems like a natural place to put AI. Tell us a little bit about how it works, and even before that, how did the idea come to you? >> Yeah, so a [clears throat] few things. I think it's also useful to talk about what it isn't, right? One of the ways in which we think about what we do at this company, and it sort of goes through our ethos and our values and our mission, is about not creating a companion, not creating a friend. A lot of companies are doing that. We do not look at AI as a replacement for human connection. >> Sure. >> And what we do at Continua is create an agent that tries to elevate human-to-human connection. So if you think about times where you might have a friend but you're just really busy in life, you don't think to text them to, you know, go out and get some beers or whatever. Continua is something that lives in your group chat, and it can give you and your friends nudges to actually get out and do something together, go out and touch grass, so to speak. So I think that's an important positioning and an important distinction, and it comes up throughout how we develop our product and think about how to create an AI that really helps the fullness of human-to-human connection.
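The nudge behavior described here can be sketched in code. The following is a purely hypothetical illustration, not Continua's actual implementation: the function name, thresholds, and signals are all invented for this example. It shows the core idea of an agent acting on observed social state (a group that still chats but hasn't met) instead of waiting to be asked.

```python
from datetime import date, timedelta

def should_nudge(last_meetup: date, messages_since_meetup: int, today: date,
                 quiet_days: int = 21, min_chat_activity: int = 5) -> bool:
    """Hypothetical group-chat nudge trigger.

    Suggest an in-person get-together only when the group hasn't met
    for `quiet_days` days AND the chat is still active enough that the
    nudge lands in a live conversation rather than a dead thread.
    """
    gone_quiet = (today - last_meetup) >= timedelta(days=quiet_days)
    still_active = messages_since_meetup >= min_chat_activity
    return gone_quiet and still_active
```

In a real system the "still active" signal would presumably come from a model's read of the conversation rather than a raw message count; the sketch only captures the proactive shape of the behavior.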
Now, about the idea itself: we should even talk about what it means to come up with ideas to work on in products. I saw a lot of this at Google, and I see a lot of this at Continua. You want to be in a position where you're coming out with something that's not too early and not too late, and the calculus is very different at big tech versus a startup that has really limited resources, right? So to us it seemed quite clear that there was something missing. If we think about the revolution in how we experience large language models through chat-based interfaces, it's been amazing. It's changed the world, but it's always been in the context of what one might call single-player mode. The human says something, the AI responds. It's a sort of call and response. It's great, but it leaves a huge amount of opportunity on the table. People are social creatures. We like to get together. We're motivated by each other. We're inspired by what people do. And so we asked ourselves, what would it mean to bring an AI into the context of these areas where people are already congregating? And when we started this project, it seemed to me, boy, we had to be too late. Why hasn't someone else done this? >> Sure. [laughter] Yeah. Right. I mean, a lot of people are making a lot of cool stuff now, so "how has no one come up with this?" is, I feel, a common thing for any good new idea. >> So there are a lot of theories on whether you want to be first, to sort of establish the environment or the ecosystem or the platform or whatever, or whether you want to be a fast follower that learns from the mistakes of the person who's first. In any case, this specific idea, it's not an easy one. It's non-trivial. There are a lot of problems you have to solve to get it right. We think we have the world's best social AI. That's how we're defining this space. There's a long road to go, and it's a very exciting place to work. So I think in terms of our timing, we were neither too late nor too early. We were just at the right time. >> I want to take a step back as well when it comes to AI in general for you, because obviously you would describe yourself as agentic. Is that fair? I mean, I feel like a lot of people are using the term. Would you or would you not? It says "text agent" on the website, right? You'd describe yourself as agentic to an extent. >> Well, we describe our product as >> That's what I mean. Not you. The product. Why did I say it that way? I kind of interchange those terms. >> I think I, as a human, have agency, you know. >> Yeah, no, you do have agency. Wow. I wasn't asking an ontological question. I was asking more about the product itself. So you would describe it as agentic. What would you say you've noticed, just to go back on your journey with AI, right? ChatGPT 3.5 comes out in, what, fall 2022, I think? >> Yeah, something like that.
I was playing around with GPT-2, I think, in 2019. I might be getting the dates slightly wrong, or maybe it was a few years after that. Anyway, [sighs] yeah, there was something really magical that happened. I feel like ML is the humility that we as programmers, as software engineers, are not able to craft programs ourselves to solve certain problems. If you think about what it would take to write a program that could distinguish between a cat and a dog, no one could do it. Instead, what ML, or what this branch of ML happened to be, was us having the humility that instead we had to create a program that could create a program. And by creating a program, it's, you know, finding good values of parameters in a neural network to actually solve a problem like that. >> Sure. >> And in some ways it might be the case that most problems out there are of the sort you can't solve by brute-forcing it. Almost like how there are more transcendental numbers than non-transcendental numbers, that kind of thing, you know, a kind of non-obvious statement. I feel like we're also in a mode where collaborating with these AIs is the humility of, it sort of points toward the fact that even in designing systems, or developing the architecture, the architectonics of it, the complexities out there are so big that we need to partner with this new intelligence. It's a foreign intelligence, but it's an intelligence nonetheless. So we can go deep and we can go down the timeline if you like. We can talk about what it means to be a researcher, what it means to be a software engineer. What we're trying to do at Continua is bring the best aspects of this technology out in ways that can help people in their lives. But I have to agree with what I think you're getting at, Demetri, which is that this is an incredible time to be alive, to see these technologies start to develop and to be part of the development of these technologies. >> Yeah, no, that's fair. And the reason I ask is an angle I'm going at in a lot of recent interviews. You said you were playing around with GPT-2. When did you start to notice, from the level of your expertise in AI, that the models really started to gain the ability to become agentic? Obviously 3.5 was just great for little outputs here and there. Was it kind of when o3 started to have reasoning, and then 4.1 was like a mini fast-reasoning model? Where did you really start to see that capability come to fruition? >> Well, I think it also depends on what we mean by agentic. I'm interested in hearing your definition, or how you want to guide the conversation, but for me it's: can a model use a tool? Can a model create a tool that it uses? Can a model understand APIs that are out there? Can a model search for APIs out there? There are a lot of different levels to this, I think. Maybe there wasn't one light bulb moment, but there were a few light bulb moments. I remember being at Google around the time that we started to see zero-shot learning work, and I remember looking through some code that a colleague wrote, and there were instructions in a prompt, you know, that sort of guided the output of the inference stage to do something in a certain way. And it was like, can this be real? It seemed totally insane and totally beautiful, and just so many lights went off when I saw that moment. >> Okay, well, yeah, that's a really good insight. I always kind of like to hear where people originally had this light bulb moment go off in their head
about not only their own thing, but AI itself, right? Like, where AI itself reaches that level, because a lot of people, I think, had some really cool early expectations that obviously weren't able to be met after everyone did some tinkering. >> Well, I think maybe several projects in my career at Google were of that sort. What's the expression? Eyes bigger than the stomach, so to speak. There were things we wanted to do that just weren't possible at the time. I was on the founding team of Glass, the wearable computer with the trackpad on the side. And >> Yeah. >> There were so many ideas we had. You know, things like, well, what about a use case of asking Glass where you left your keys if you misplace them? Well, okay, how do you do something like that? First of all, it has to be on all the time so that it could see where the keys last were. Well, you can't do that in a product with really limited battery that's overheating all the time if you have the camera on all the time. But there are worse problems. Okay, now you have to do computer vision. Now, something like keys, at the time, I guess that was like 2012, 2013, maybe it was possible to identify the class "keys," maybe not. Back then, computer vision was, well, okay, let me correct a bit: it was sort of when the AlexNet, ImageNet stuff was starting to kick off. But we were using technologies that were a lot better with planar, two-dimensional objects, doing feature recognition, histogram-of-gradients-type stuff, and so it was really good with things like CD album covers or book covers, not so good with contour-defined objects. Okay, but let's assume you could recognize something like keys. Now you have a problem where you have to state your intent: the fact that you're missing your keys and you want to find them. It sounds like a simple search problem, but people can say things in a million different ways. How do you make that fast and efficient? Well, there were techniques that could work, okay, but not so great. This was pre-LLM, pre the understanding of natural language the way we have it today. So many challenges compounded. I can go even deeper on all these challenges, but the reality is that we are in a much better place today, given the progress that has been made across all these different domains. And especially, I'm excited by the multimodality we've been seeing across images, video, sound, text. The more you can train on multimodal inputs, the more the network is going to have internal representations that are able to support each other and help it do a much wider variety of tasks. >> Yeah, that's fair. And I think multimodality is actually getting improved more and more in a lot of cases. For example, I run a media agency, right? And we have a thing that just occurred with us. So I had a training course, we make tutorials for SaaS
companies, essentially, in that type of style. So it's a very specific style of edits, right? So I had to make a training course for any new editor I brought on. And with Gemini 2, I made this whole bunch of mini Gemini agents, because at the time Gemini, I think, was the leader, and is still the leader, in sort of vision recognition of on-screen assets and stuff like that. When it comes to Gemini 3, I think the marker I saw was that it went from like 11% to 70% on >> Yeah. Yeah. And actually, shout-out to Abanchu and team who worked on that. I know the team really well. >> Oh, you do? Okay. >> The screen understanding work is >> incredible. It's something that has been >> on my mind for a very, very long time. I feel like what comes off of screens, and being able to understand it, has been so key, and we're seeing so many beautiful computer-tool-use cases right now that are built on that technology. And I agree with you, Gemini 3 is the definite leader in this space. >> Yeah. And let's talk a little bit more about your Google experience in a sec, but just to finalize that thought. I'm trying to wrap up that original thought I had about how things weren't really able to be agentic, let's say, from a reasoning standpoint, two years ago, but now we're getting there, and it is possible. With Gemini 3 going from 2.5, I
actually had an hour-, two-hour-long course, with modules on how to properly create a tutorial edit, right? I've made like 2,000 of these, so I really just laid it out, and any time I get a new video editor, for the most part they do pretty well, and they're onboarded within like a week, because they just have to watch the course, follow it, use our little template, and then they're good to go. But there's another side to it, which is quality assurance of the video editing. I was actually able to just make a bunch of mini, very detailed prompts, with a system prompt and then an asset-level prompt, and whenever somebody uploads a video for review now, I have a Gemini agent with 13 steps go through each of them. But in 2.5 it was not working, and a lot of people don't realize the tech might just be behind your idea. Like, the models just might be behind where your idea is at. I think I said that right. >> Yeah. [snorts] Yeah. No, I get your point, and there might be more you want to add here, but if I could just >> No, you go. You go. Yeah. >> Yeah. I think it's, how do I put this? It's almost like the concept of bootstrapping, you know, pulling yourself up by your own bootstraps. Being able to either develop these modules, or, in the case of a startup, develop the code itself, being able to make it as closed-loop as possible. These are things that we obviously do at Continua, and a lot of companies do. Being able to have AI evaluate the quality of your user interface, of your product, right? And you have to be really careful, like you were saying, that the AI is capable of doing that. If you try a model that's, you know, not quite up to snuff, then you might be set down the wrong course, or the model might not be helping you. It might be wasting time and money. It would be better for you to evaluate it yourself as a human. But once we get to a certain level, and I feel like we're almost there, then you can do a tremendous amount of loop closing, so that the development of your UI and the evaluation of your UI can happen overnight while you're sleeping, right? And you wake up the next day and you have something beautiful that's been created and evaluated by the best AIs out there. >> Yeah, no, absolutely. It's an interesting spot to be in right now with these things, because the capabilities, like I said, can be slightly behind. So, kind of furthering my question: where have you seen some of your ideas, and what you guys are doing, come to fruition more and more because the tech has been able to enhance what that initial idea was, or maybe the pricing started to make sense? You know, a lot of tools, whether it be Claude, they recently released 4.5; Opus is the best coding model, and their price went to, what, like a third, or halved, or whatever, right, from an API standpoint. So things like this happen often. Has there been anything like that in the last little while that has allowed you to confidently add a new feature, or something to that effect? I'd love to get into more cool features that you guys have as well. >> In some ways, the whole decision for me to start Continua when I did was, you know, a bet or a prediction on the trends, right? >> Okay, sure. >> And that goes in, I guess there are two axes there, maybe three. One is the quality of the models, right? So back in April 2023, when I started Continua, hallucinations, that was the big issue. So you had one camp saying, well, this is
inherent, it's never going to go away, and that sort of puts a ceiling on the utility of these models. Another camp said, well, look, there's been a lot of progress really fast, and a lot of really, really smart people are working on this problem. It will go down. And I think it has gone down. >> Absolutely. >> Yeah. I mean, the whole thing about hallucination is, all these models do is predict the next token. So whether it gets it right or gets it wrong, it's doing the same thing. So "hallucination" is probably the wrong mental model for how these things work. >> Wait, explain what you mean by that a little more, just so everyone can get more context, and then feel free to continue. >> Yeah. Yeah. How do I put this? And by the way, "Continua" is sort of a play on the whole notion of what an LLM does: it just continues what you give it. Whether a right answer or a wrong answer comes out, we should be amazed just that it gets the syntax of language correct, that it gets the mechanics of how these things work. And then if it says something wrong or says something right, we just have to think about, well, what went into the training data? How does this thing work? How does it sample from the distributions coming out to actually continue? The temperature settings, all of that. When you understand how these things work, it sort of demystifies it. It's like, well, okay, of course it's going to say some right things and some wrong things. Now let's build up systems and protections and guardrails that get it to do more right than wrong. It's not like the thing has some extra capacity to hallucinate in a sort of anthropomorphized way. So I guess what I'm saying is, hallucination is a good sort of linguistic shorthand for some behavior we don't want, but it has no analogy to what a human might be doing. >> Okay, now that makes more sense. Thank you for that context. >> The second axis would be cost, as you mentioned. And yeah, that was also a bet that I made at the time. Now, the thing about Claude 4.5 Opus thinking, at
least what I read recently, and I could be wrong, is that it's not that it's necessarily cheaper on a per-token basis, but that it uses fewer thinking tokens than the previous model. So it winds up being cheaper in the end, but I'll have to check the actual token costs. I think it's quite clear that the big tech companies are lowering prices, first of all because they've become much, much more efficient in how they're doing things, and that's the whole thing about what big tech can do at scale that smaller companies cannot do by any stretch of the imagination, and it has been wonderful for the startup ecosystem. Secondly, the development, further research, and further deployment of on-device technologies is really key. This is something I worked on a bunch at Google. What would it mean for a lot of this computation to be pushed to the edge? There are a lot of these devices out in the world, and they all have some sort of acceleration for what is required to do machine-learning-type inference on them. And there are latency benefits from doing so. There are obvious cost benefits to a company, because you're essentially saying, okay, the energy is going to be paid for by the consumer when they plug their phone into the wall and charge it at night. There are also really interesting aspects around where the data is going and what is doing the processing on the data. Now, at the moment, of course, the technology isn't quite there for getting the highest-quality results. You're always going to get a higher-quality result using a server to do inference, but over time there's an increasing amount of computation that can be done at the edge. And I think that's another real bullish reason for why I started Continua when I did, because it's becoming easier and easier to do that kind of offloading. And then the third axis is latency. These models are getting faster. I mean, you have to be really careful about how you use context. It's not a panacea to have an incredibly large context window, because you're going to suffer in terms of latency at a certain point, and you can also suffer on quality, because you're requiring the model to do more work to separate the wheat from the chaff. But yeah, just touching on your question about >> the technology trends, these are the different aspects that I'm seeing, and nothing is holding still, nothing is in stasis right now. Things are moving really, really fast. >> Yeah, no, I think that's interesting. This is actually one of the better explanations I've heard of the different axes that some of these models work on, because usually I'll ask questions about that in general, but you really have gotten in depth with it, so I appreciate it. It seems like those 17 years at Google, you really have maybe a bit
of more technical background than some of the founders I've interviewed. Really cool. I kind of want to dive more into the product itself, given this type of knowledge that you have. I would love a brief breakdown of what makes you guys tick, who the people are that you would like to use your app, and some very common use cases that you find. >> Right. Right. Yeah. So it's really a beautiful product. I feel it in the great feedback we get from our users. Our retention is really high. People who use the product tend to stay on it. The way to think about it is the following. The top apps out there on anybody's phone are going to be chat-related applications, messaging applications, whether it's iMessage on iOS, Google Messages on Android, WhatsApp, Instagram, Telegram, these sorts of things. >> All the grams. >> All the grams. Continua appears just like any other participant in a chat. You invite it. It's not in all of your group chats, it's only in the group chats you want it to be in. So, Demetri, you and I could have a DM. We could also have a group chat with Continua participating in it. Okay, now what does it mean for Continua to participate in a group chat? What we've created is a quiet helper. It answers when it thinks it can be useful, and it stays quiet most of the time, but it listens really carefully. These are things that were really hard to build. I'll get into the use cases and so forth in one second. What I want to underscore here, because this is a somewhat technical podcast, is that all existing LLMs are instruction-tuned for this kind of call and response. You know, first you do the pre-training, you get your base model, but then, to make it useful, you have to instruction-tune it. So they're all designed for single-user, single-agent call and response. That's exactly what you do not want in the context of a group chat. You would have, you know, Microsoft Clippy at that point. You don't want the agent responding every single time. You want it to have a sense of the conversation flow. You want it to have a sense of social etiquette, of social discretion. Think about if you had a new employee working for you, and you were in the context of a team meeting. What are the characteristics you would want in this new employee? You'd want them to ask questions if they didn't understand what was going on. You'd want them to offer helpful insights if they had insights to offer. You'd want them to be quiet if they didn't have anything. You wouldn't want them barging in at every moment. And last but not least, you'd want them to listen really, really carefully, because what they hear in today's meeting could help them do their job better tomorrow. Now, to actually implement that, we had to sort of break the LLM's brain. We're not a thin wrapper over any one LLM. We have a bunch of fine-tunes that really understand the intent. We have designed Continua so that, if it feels it could really be of help, it will say something even when not directly addressed. These are the sorts of conversational dynamics that we have as people, and we try to put them into Continua. And we firmly believe that the path to AGI has to go through solving social AI. In other words, if you don't solve social AI, it would almost be like a human savant who knew a lot of facts and could make a lot of inferences, but did not show up well, could not cooperate, could not collaborate with other people or other agents. >> Okay, yeah, that makes a lot of sense. Once again, you're actually doing a really good job of explaining this in more detail than I usually get. So,
I appreciate that context. What would you say is the current level of user base you're working with? If I'm not wrong, you got an $8 million raise to bring AI agents to group chat, so congrats on that. What was that process like, and what's the early feedback you're seeing from people? >> We raised actually right after I left Google, so back in 2020; the TechCrunch article came later. >> Yeah. >> I'm not sure which article you're referring to, but in it we discussed the actual product, and the raise happened before. Some companies announce the raise before they have a product. We decided, you know >> you wanted to have a product first >> Yeah, exactly. So think of all the different cases. Say you have some roommates and you just want help keeping track of who's doing the chores on a given day, who's taking out the trash, who's doing this, who's doing that. Or a family where the parents are coordinating on who's picking up the kids, when a certain after-school event is happening, when the gardener is coming over. For these things, it's almost like having a household PM when you have Continua in the chat. Then there are cases where friends are really interested in some sport, basketball or whatever, and they just want updates on what's happening in that sport, what players are going to what teams,
updates on game stats, that sort of thing. Or people interested in stocks who want updates on those. All sorts of tricky situations come up where you want another form of input or advice, and an AI can help. We find that in group chat, AI can be a really good facilitator in helping people come to decisions and work through different problems. And if we think about some of the more specific facilities that Continua provides, there's the reminder functionality: Continua can, at times you ask it to, tell the group chat when something has occurred. I have a group chat with people who discuss AI, with Continua in it, and if Continua finds relevant information that we care about for a specific subdomain, it will alert us. But not only that, it can create Google documents. In one of my chats, once a week it keeps a Google document up to date with everything that's happened in a certain domain. So think of all the cases where you might use a sort of single-player-mode LLM, where right now you have to copy-paste everything back and forth: hey everybody, this is what Claude said, and you send it over there. Instead, Continua comes to you and your group where you're already communicating, and provides value in that space much more directly. >> Okay, that makes sense. And what do you think most users are
buzzing about in your product, or what are you most excited about in your product that has been released recently? It doesn't have to be recent. >> We recently had a nice shopping release, and that's been great. Because Continua is in these group chats and it listens carefully, it gets to know people's interests, preferences, things they might want. So we did a shopping integration with Amazon and a few other companies. Around an event like Black Friday, which just came up recently, if you want to get a gift for someone, you can discuss it with Continua: okay, I want to get a gift for a certain person, what would be the right thing? Because Continua has this history of what's happened in these group chats, it can do an amazing job of providing really personalized suggestions. We've seen this both in the group chat and in the DM scenario. On the DM side, it's worth explaining how this works, because I think it's one of the most interesting aspects of Continua. The memory model of Continua is such that information only flows down from superset groups to subset groups. So if I am in a group chat with you, Demetri, and with Continua, and we're discussing, I don't know, AI or books about AI, then I can later DM Continua and say, hey, I want to get a gift for Demetri, what should it be? And Continua would say, well, Demetri is really into agents and AI; here's a book that was just published, and here's a link to Amazon where you can buy it.
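The superset-to-subset memory rule described here can be sketched as a simple set-containment check. This is a hypothetical illustration, not Continua's actual implementation; the `visible_memories` helper, the member names, and the stored strings are all made up:

```python
# Hypothetical sketch of a superset-to-subset memory rule (not Continua's
# real code): a conversation may draw on memories from any chat whose
# member set contains every member of the current conversation.

def visible_memories(current_members: set[str],
                     chats: dict[frozenset[str], list[str]]) -> list[str]:
    """Return memories from every chat whose members are a superset
    (or equal) of the current conversation's human members."""
    visible = []
    for members, memories in chats.items():
        if members >= current_members:  # memory flows "down" to subsets
            visible.extend(memories)
    return visible

chats = {
    frozenset({"david", "demetri"}): ["Demetri is really into AI agents"],
    frozenset({"david", "sister"}): ["Planning a Disneyland trip"],
}

# A DM is effectively the singleton group {"david"}, a subset of both
# chats above, so both memories are available there.
print(visible_memories({"david"}, chats))
# The chat with Demetri is NOT a subset of the chat with David's sister,
# so the trip memory stays hidden from Demetri.
print(visible_memories({"david", "demetri"}, chats))
```

Under a rule like this, the gift example works as described: the DM with Continua sees everything from the larger group, while nothing said in David's chat with his sister can leak into the chat with Demetri.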
And the reason that works is because >> Continua was in the chat with you and me, and so >> Oh, okay. >> when I'm in the DM with Continua, it has that knowledge just like a human participant would. >> Interesting. Could you give me maybe another example use case of how some people have interacted with it, or how you would use it personally? Because obviously you use your own product. >> I hope so. >> Yeah, obviously we do. A few different ones. Trip planning, that's a really big one; it's probably where the light bulb moment happened for this social AI product. This was early this year. It was just getting to the point where we could test it, and I went on a trip to Disneyland. I took my daughter
to Disneyland; it was her first time there, and my sister was joining us. So my sister and I had a group chat with Continua, and we used Continua to completely plan this trip. We would give Continua some rough input, okay, these are the dates that would work, and so forth. Continua figured out, with our feedback, what restaurants to go to after we were done with Disneyland, what rides to go on while we were there, and in what order. It was a ginormous chat that was incredibly fruitful and helped us plan and get to a trip that was much richer than we could have managed on our own in the time we had. What was even more interesting was that when we were actually at Disneyland, our experience was also enriched by Continua, because all sorts of things happened on the ground, like one ride being closed, and different exigencies that Continua helped us work around; it re-plotted our itinerary on the fly. And then there were cases where I was in one part of the park and my sister was in another. But because Continua meets us where we are, in standard group chat, iMessage on her phone, Google Messages on my phone, it was there and available to help us get through the logistics of the trip. And at the very end, Continua listed in a Google doc all the rides that we went on and what we should do next year, because we missed some things this year. So this was a really magical moment that made me realize, oh, we're definitely on to something.
But yeah, we use it all the time. We use it at the company. We go to lunch every Thursday, and Continua helps us figure out, okay, what's the next restaurant we're going to go to? What did we go to before? What's everybody's meal preference? So think of it like a really personalized helper in the context of all these things that we do as people. >> That's pretty cool. I'm wondering then, with this type of thing, you seem to be trying to make day-to-day interactions and social planning easier, at least that's what I'm noticing, whether it's a long, big thing or just a small thing. What is the ultimate vision and goal for Continua? If a large number of people were using this, what would be the outcome that would make you happy? >> We are building more and more agentic connections. We have a Slack integration; it's internal, not yet launched, and it's working great. It has access to our code and our logs on GCP, and when we're debugging something, it helpfully chimes in. We had this incredible moment, maybe two weeks ago, where, just out of nowhere, not even prompted in any way, it proactively said, "Hey guys, I just noticed that Gemini 3.0 dropped. Can you upgrade parts of me to use parts of that?" So, [laughter] it asked us to upgrade itself. But in any case, we view this type of Slack integration as something that can
be really, really useful for different organizations. But looking more broadly, we're really interested in how AI can participate in group settings. As you've noted, right now we're focused on text messaging, but there's something much broader we can do. Take, for example, this video chat we're having right now. There's been a lot of great AI out there to summarize meetings and provide a really nice timeline of what happened. But what would it mean for an AI to be a quiet listener, and then, when we get stuck on something, for example, earlier in this conversation I was a little unsure about the cost per token for Claude 4.5 >> Sure. Yeah. >> Opus, with thinking. Well, Continua could just raise its hand and say, excuse me, I think it's such and such, and here's the reference for it. So that type of more immediate, real-time interaction is something we're going after. >> That'd be pretty cool. I would like that very much. I think it would be an incredible showcase of the ability of AI to not only reason and understand when we are directly asking it something, but to contextually chime in when there is a potential issue. So that's really cool. What are some of the features you're working on right now, maybe in the background, that you can talk about, or what are you looking forward to bringing to people in general in the next year? >> More personalized experiences, that's really the key
thing. Really leveraging the existing facilities of chat-based platforms, but then extending beyond them to provide really bespoke, interesting experiences that are personalized to the people in the conversation. That's where I see things going. The way I like to think about it is that users don't care about apps per se; they care more about the capabilities that apps provide. So the extent to which, through this agentic technology, Continua can mix and match or remix the capabilities of existing applications, that's really the goal we're after. >> Interesting. Okay, that's awesome. I just have a couple more questions before we close the episode out, a little more about the world of AI in general. Given your history with on-device AI at Google, there's a big debate right now about massive models in the cloud versus smaller, private models running locally. My question would then be, for businesses concerned about data sovereignty >> Data sovereignty, yeah >> and speed, which side of history do you think wins out? Will our social AI eventually live entirely on our laptops? What are your thoughts there? >> Yeah, that's a good question. We have to separate out a few things here. There's a whole field, or concept, around secure computation and how it could be provided in a resource-effective way. Think about secure enclaves in the cloud, or trusted execution environments on device. In fact, I think all the major cloud providers have something of this sort. GCP recently announced a secure computing platform
that works with Gemini. So think about data that's encrypted not only at rest and in transit, but also during computation, via encrypted memory. You can get a really high standard of protection and privacy by using such products. There's probably some premium for using them; I'm not quite sure. So if that's the goal, it sidesteps the on-device versus server-side question. On-device has a lot of really nice advantages. It works in disconnected mode, although obviously there's only so much you can do there; Continua is a social product, so if you're disconnected and can't talk to other people, that's probably not the most important thing. Latency: when you're in a cell dead zone, it's nice to have more reliable latency when you can do computation at the edge. And there's a lot of information in a device like a phone where the more pruning of the data you can do at the edge, the better; you could imagine all sorts of ways of anonymizing information at the edge. There's a whole literature around something called federated learning, which I saw up front at Google, around doing decentralized training of models. You can think of it almost like a MapReduce, where in this case the map stage would be the gradient descent happening at the edges, and the reduce stage would be an aggregation of the gradients in the cloud, with noise added at the client. So think about differential privacy: there's then some
plausible deniability of any specific data point at the client. These are things that need to continue; there are really important reasons for this sort of privacy work to fully develop. I would say the benefits of on-device are just pretty huge. Also, hearken back to what I said earlier about the cost savings, because you're sort of distributing the inference cost across people. But it's not yet at the point where it can replace the cases where you need a really, really large model in the cloud, and thankfully, via technologies like secure computation, you don't have to make such a strong privacy compromise anymore. >> Interesting. Okay, that makes sense. I want to close things out with one final question, because we're coming up to the end of the hour here. Outside of the product you're creating, which is awesome, what would you say is your best and favorite AI product? >> I would say Warp, the Warp terminal. Zack Lloyd is the CEO; he was at Google before. If you haven't played with Warp, do it. To me, it's the best AI-assisted coding environment. I'll just say one concept about it, though I could wax poetic on this tool for a very long time. Think about how, with browsers, a long time ago you could only enter a URL into the URL bar, and then, I don't
know, maybe 15 years ago, you could enter search terms; in Chrome you could search from the URL bar. In the Warp terminal, you can type ls, you can type cat, whatever; you can also type, hey, I want to work on a new feature and it needs to be structured in such-and-such a way, please go at it. So in this one place you can do both natural language and terminal commands, and it melds everything together in a really seamless fashion. >> Interesting. I hadn't had that answer yet, but I appreciate the insight. I'll have to check out the Warp terminal after this one, and anyone listening who's interested should check it out too. The last question I do have, which is obviously the key one for everybody, is where can you send people, or where can everyone go, to check out Continua? >> Yeah, just Google for Continua, Continua AI; our website is continua.ai. Using our product is the easiest thing imaginable: there's a phone number, you just add it to a group chat, and it only sees what happens in that group chat; it doesn't see anything else. We'd love any feedback. We're also hiring, we have a couple of positions open, so you can check out our careers page as well. But yeah, we're just so excited about this field of social AI and what it means for humanity to bring in this new form of intelligence, in a harmonious, synergistic way, into the context of our lives to help further human connection. >> Awesome. Well, I appreciate that, and I hope you all go to continua.ai. That's continua.ai, or Google it; look up David Petrou on LinkedIn and check out what he's done in the past and what he's doing now. It's really cool stuff. With that being said, thank you all so much for listening to this episode. Make sure to leave a like, subscribe, and we'll see you in the next one. Peace.