Building AI Enterprises with Codelligence's Juri Kuehn
About This Episode
Juri shares insights from his 30-year coding career and explains how Codelligence is building coding agents that go beyond code generation, automating tasks like code reviews, documentation, and error analysis to streamline workflows in enterprise development teams.
We dive into the rise of agentic coding, the evolution from basic tab completion to AI-powered sub-agents, and how tools like Claude Code and Lovable are enabling non-developers to build full applications.
Whether you're an experienced engineer or new to the AI tech stack, this episode offers a deep look into how AI coding agents are transforming how modern teams build software efficiently and at scale.
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⏰ TIMESTAMPS:
0:00 - The Rise Of Code Completion
1:02 - Introducing Juri Kuehn From Codelligence
3:00 - From Developer To AI Founder
5:00 - The Birth Of AI Agents
9:02 - Claude And The Future Of Coding
13:00 - Evolution Of AI In Development
17:27 - Agentic Coding With Claude Code
21:50 - How Sub Agents Work
27:02 - Open Source Agents For Teams
33:00 - Developer Adoption Challenges
41:00 - Codelligence Platform Demo
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Sign up for free ➡️ https://link.jotform.com/D8TsmM7yeA
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Follow us on:
Twitter ➡️ https://x.com/aiagentspodcast
Instagram ➡️ https://www.instagram.com/aiagentspodcast
TikTok ➡️ https://www.tiktok.com/@aiagentspodcast
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Transcript
So when it started, you just had ChatGPT, and they opened up the API, so tools were popping up and people started implementing plugins for development environments, for VS Code and JetBrains products. And back then, what was already in all of those solutions is so-called tab completion. If you start writing code, the IDE, the development environment, suggests a completion for that line or that variable or that statement. So you just start typing, then press Tab, and it completes the rest for you.
>> Hi, my name is Demetri Panici, and I'm a content creator, agency owner, and AI enthusiast. You're listening to the AI Agents Podcast, brought to you by Jotform and featuring our very own CEO and founder, Aytekin Tank. This is the show where artificial intelligence meets innovation, productivity, and the tools shaping the future of work. Enjoy the show.
Hello and welcome back to another episode of the AI Agents Podcast. In this episode, we have Juri, the founder of Codelligence. How are you doing, Juri?
>> Hi, Demetri. Nice to be here.
>> Yeah, it's a pleasure to have you on. I've had the pleasure of checking out everything you're doing at Codelligence, and I think it's really exciting in the age of AI agents to do coding with AI, which from my perspective sounds like Codeception, you know? Next thing you know, we're going to have agents making code agents that make code agents again. So, jokes aside, really excited to chat about the product. Tell us a little bit about how you got into Codelligence and, I guess, going back a little, your background before it even started.
>> Yeah, gladly. I think you're right about agents writing agents that write agents. Codeception, yeah, I'm on it. So, I'm Juri. I'm 43 years old, and I started coding 30 years ago; this year is my anniversary. I started coding at 13, and then I worked for startups and German enterprises, building software as a software developer, tech lead, and CTO. I built software that processes a lot of requests a second or has high security standards: for example, systems that would manage entry to power plants, or systems processing billions of euros for an energy company. So I did a couple of things, worked for many companies, and worked with a lot of developers. So I love software development, and when AI kicked off, when ChatGPT came out, I understood that this would change everything, and I got fully into it.
>> Yeah, I mean, it's interesting to me how the world of AI in the last couple of years has made this natural shift. I wasn't even familiar with what an agent was until we started the podcast, to be quite frank. I did my research as best I could, but in my opinion it's transcended any of my wildest expectations in the last couple of years. For you, as someone who's familiar with the development world, how do you feel about the pace things are going at?
>> It has been scary, I have to admit, because when I sat down with my developer friends, I told them this will change everything, that the AI will be doing our job five years from now. And we're on track, actually. So it was scary. I tried to help companies with different products, and for myself as a founder it was really hard to keep up, because we'd build a tool, and then a couple of months later OpenAI would support it themselves. We built something else, and everything was moving so fast. And when ChatGPT came out, nobody really knew how to apply it, right? It was new. And right now, research is still being done on how AI works, on what's happening inside. Even the people who built the models are still researching how they actually work. So we do not fully understand what's happening inside and how to use it properly, safely, and efficiently, and how to apply it to all the tasks in our businesses and everyday life. We're still learning that. And as you mentioned AI agents: I think it was about a year ago when the term popped up. Before that, we were speaking about AI automations and used different terms, and then "AI agent" nailed it, I think.
>> I completely agree. AI agents were not in the purview of my mind at all. We started the podcast, and, to be quite frank, I made a joke there, but I was serious: I was a big proponent of AI, I had done a lot of content about many AI companies and products, and when I was asked to make a podcast about the AI agents out there, I was like, what the heck is an AI agent? I'll be completely transparent: I don't think anybody knew. It was around June of last year when Aytekin first brought it up, and I was like, I don't know what that is. And it's amazing: we've gone from large language model text generation, like GPT-3.5, which in retrospect did a decent job of writing things, to agentic workflows and, I wouldn't say fully autonomous agents, but really strong agents that can do so many different things.
First it was marketing, and now, just to get into what your product does: you're essentially a coding AI agent company, so to speak, right? You bring AI agents, like Copilot, to code.
>> Yeah. So what you can observe is that coding is a major thing for the model companies, for the large language model companies. They pursue building models that can code very well; that's a very high priority for them, because they know that if they're able to create a model that codes really, really well, they can apply it to the model itself, so it can start improving itself, right? So they can get ahead of everyone else. That's why there are pretty good coding tools, and we actually also started initially in that space, to compete with GitHub Copilot.
>> Sure. Yeah.
>> But we wanted to bootstrap, and, well, that wasn't the best idea. So then we focused on automations around the coding itself, because in companies where teams of developers work on one product, developers spend only one third of their time, or less, actually working on the product. Everything else is overhead: they need to do code reviews, write documentation, answer questions, do research, write changelogs, and fix or even just analyze issues. And a lot of that, AI can do very well by now. And coming back to the pace statement: AI development was really fast in the last two years, I would say. But look at the models now. Take GPT, for example: GPT-3 was a sensation, and you could already build chat agents, chatbots, with it.
>> It was all right for that.
>> Immediately, yeah. But for more complex use cases, it wasn't accurate enough, and it was too expensive and too slow. GPT-4 was incredible; it was a big jump in quality, so you could actually start building complex stuff on top of it. But since then, not that much has happened. The next jump, to GPT-5, is...
>> I'll be honest, that was disappointing. Everyone was expecting it to be something incredible, and it was not that good.
>> Yes, yes. So what the models got really good at is understanding human intent. Your prompting skills do not need to be as good as a year ago, for example; the models understand better what you actually want, because the companies collected a lot of training data from ChatGPT. So they know that if the user enters this, he actually wants something else, and they trained the models in a way that they understand intents better. The models got a lot cheaper and faster, and tool calling is more reliable: a year ago you could provide maybe six or seven tools, and if you provided more, the model would confuse them. Right now, the limit has gone way up.
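To make the tool-calling point concrete, here is a minimal sketch of how tools are passed to a model through an OpenAI-style chat completions call. Only the API shape is real; the tool name, its parameters, and the repository in the question are invented for illustration.

```python
from openai import OpenAI

client = OpenAI()

# Each tool is described with a JSON schema. Older models got confused once
# you passed more than a handful of these; newer models handle many more.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_open_pull_requests",  # hypothetical tool name
            "description": "List open pull requests for a repository.",
            "parameters": {
                "type": "object",
                "properties": {
                    "repo": {"type": "string", "description": "owner/name"},
                },
                "required": ["repo"],
            },
        },
    },
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Which PRs are open in acme/webapp?"}],
    tools=tools,
)
# When the model decides a tool is needed, it replies with a structured
# tool call instead of prose.
print(response.choices[0].message.tool_calls)
```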
So the utility of the models is now a lot higher, but the, let's say, intelligence is more or less the same. It's a lot better than a year ago, of course, but the jump is not like the one from GPT-3 to GPT-4. And what I see now is that model development is stalling. It's not the main thing everyone is focusing on; they're going into the implementation phase instead, and this is where the agents come in. So they provide integrations: in ChatGPT you can use Zapier, you can integrate with Gmail right out of ChatGPT. They're going into the businesses and providing everything needed to implement the use cases.
>> You know, it's interesting, you mentioned a good point about how you'd add so many tools and they'd start getting confused, there was too much stuff. Could you speak to that from a coding perspective? As the products have continued to expand and get better, with larger context windows and larger reasoning capabilities, what has it been like in the coding world? Maybe explain that trajectory a little more, because at the consumer level everyone is probably decently aware that you've got XYZ product, whether it's Claude, ChatGPT, etc., and that they understand things better and can handle more complex requests. Was it the same progression for AI coding? Because I don't actually know this progression very well. I just know that everybody uses Claude Code now, or whatever.
>> Absolutely.
So when it started, you just had ChatGPT, and they opened up the API. Tools were popping up, and people started implementing plugins for development environments, for VS Code and JetBrains products. Back then, what was already in all of those solutions is so-called tab completion: if you start writing code, the IDE, the development environment, suggests a completion for that line or that variable or that statement. You just start typing, then press Tab, and it completes the rest for you. And GitHub Copilot, even before the ChatGPT release, already had autocompletion, because they had their own model. It wasn't as strong as ChatGPT, but it was specialized on code, so as a developer you could start writing code and it would suggest a whole block of code for you. It wouldn't just complete a variable name or a method name, a function name that you would call; it would complete a whole algorithm. For example, you want to iterate over your customer list and get the names in a certain format: you just start typing, maybe write a comment above it, then press Tab, and it completes the whole block for you. Which is a big thing, because you don't have to write everything by hand. But not everyone was using it, because it wasn't super great yet.
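As a concrete picture of that comment-driven block completion, here is a small sketch. Assume the developer types only the comment line; everything below it is what the assistant would suggest, accepted with one press of Tab. The data and names are invented for illustration.

```python
# Sample data standing in for the "customer list" from the example above.
customers = [
    {"first": "Ada", "last": "Lovelace"},
    {"first": "Alan", "last": "Turing"},
]

# format customer names as "LAST, First"   <- the developer writes this comment
formatted_names = []                        # ...and the assistant suggests the rest
for customer in customers:
    formatted_names.append(f"{customer['last'].upper()}, {customer['first']}")

print(formatted_names)  # ['LOVELACE, Ada', 'TURING, Alan']
```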
But then OpenAI opened up their API, and a lot of startups and projects popped up. They built on the OpenAI API and started to implement this kind of solution for developers too: the developer starts typing, the tool tries to understand the developer's intent and completes a whole section of code. In the best case, you just start typing and then Tab-complete, Tab-complete, and you save a lot of time. And now that GitHub has partnered with OpenAI, Copilot is using their models, and it's working a lot better. Although, you know, Cursor does a better job. But if you look at the solutions you have, GitHub Copilot, Cursor, Windsurf, Claude Code, those that yield the best results use the Claude family of models, right? Because Cursor, if I'm not mistaken, defaults to Claude.
>> Absolutely. Yes. And we're constantly testing whether there's a better model. For example, Google's models are really fast, and it's a bit annoying if you're programming and you need to wait until the model finishes.
>> Absolutely. Yes.
>> So I'm looking forward to when they become really, really fast. But none can match Claude Code.
>> And the standard at the moment is the Opus 4.1 model, right?
>> Yes. Yes.
>> Okay.
>> Yes. But even the Sonnet model has this thinking mode, and it's also really, really powerful. And, just to complete this one, there was then a major shift from tab completion to agentic coding, which is what Claude Code offers. You do not edit code in the editor anymore; you don't type the code yourself.
You just explain what you want to achieve, and the AI, in this case Claude Code, writes the code. This is the next step of development, agentic coding, and here we are.
>> And I guess just to speak to it practically, I'll ask one more question about how it works, and then I'd love to get into what it looks like for the average user. So Claude Code released sub-agents recently. Is that correct?
>> Yes.
>> Could you speak a little bit to what that process and situation is like? Because from an agentic standpoint I'm curious what your thoughts are on it, and from a technical standpoint maybe you can explain it to us a little better.
>> Yeah. So we have different directions of advancement in the AI space. One is general intelligence: how well can the model understand what we're aiming at, produce an accurate response, and handle a large context window. And a newer one, with agentic AI, is how long a model, an AI agent, an AI system, can perform on its own. Claude Code, for example, is an agentic coding system. You can type in a specification for a whole application and tell it to use sub-agents; it will make a plan for how to implement the system and then spawn sub-agents, so multiple agents work concurrently on building your application. It doesn't work perfectly: you need to describe your task carefully, so it can split it in a way that the agents do not interfere with each other. But I think that will get a lot better soon.
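For readers who haven't tried the feature: Claude Code sub-agents are defined as Markdown files with YAML frontmatter under .claude/agents/, where the body becomes that agent's system prompt. Here is a minimal sketch, assuming that documented layout; the agent's name, tool list, and instructions are invented for illustration.

```markdown
---
name: test-runner
description: Runs the test suite after code changes and proposes minimal fixes.
tools: Bash, Read, Edit
---
You are a test specialist. When invoked, run the project's test suite,
read any failing output, and suggest the smallest fix that makes the
tests pass. Do not touch code unrelated to the failing tests.
```

The planner can then delegate work to this agent while other sub-agents handle separate parts of the task, which is why describing task boundaries clearly matters so much.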
So Claude Code is able to turn a specification into a full application, and there are also other solutions, like Lovable, for example. Maybe you've heard of Lovable?
>> Absolutely, yeah. I've used it. It's good.
>> It's really, really good. Lovable, for example, is really good at building UIs, and Replit just released a new coding agent that is also capable of building whole applications. They advertise 200 minutes of autonomous coding, so the agent can produce code for 200 minutes straight without needing human interference or going off track. If it starts hallucinating and building something crazy, that's not what we want. So Claude Code is capable of coding for a long time using those sub-agents, and it has a lot of features a developer can use to make it work for a long time and implement a complex specification. And there are those specialized coding agents, like Lovable and Replit and v0. Lovable is really good at building UIs, so you can build whole applications yourself. I've walked a couple of people through it who are not developers at all, and they've built complete websites with login and data storage. So a lot of developer tasks can be done today by someone who is not a developer at all.
>> I mean, that to me is the most incredible thing about what's happened in the last couple of months. I've put together some stuff with just Lovable and Claude Code and a GitHub sync between the two, fixing things in repositories. There are even third-party tools out there right now, like one in beta I was testing where you basically get a remote terminal, so you can run multiple instances of Claude Code, and it's got a nice interface. It's just crazy. I don't know how to code, but I was able to put something together that was functional. And I guess I'm just curious, for you as a founder in this industry: what are you doing, from a unique standpoint, to really stand out?
And how are you maybe utilizing these different models in your own product?
>> So I think we're at a time where it's not really clear how AI agents will be used in software development teams, just as two years ago we didn't really know how to use AI for coding. Some people already had the idea that there would be agentic coding, where you just put in a specification and work from a terminal, not from your development environment anymore; and people are now building new development environments that just monitor the changes the AI makes. So we experiment with different solutions and see what works. I think what's special about us is that I bring quite some experience from the software industry, had the opportunity to work for so many companies, and saw that they basically all have the same struggles. A lot of solutions are being built, but my learning from the last two years is that the value of code is going to zero, because you can just say what you need and the AI will produce it. So I think what's unique about us is that we're open source, and our agents integrate into customers' diverse environments. Every team produces code, but one uses GitHub, another GitLab and Jira; they work in an agile way, but they have slightly different workflows and setups. Some have big DevOps operations, others don't. And if you get the developers to work faster, if you drive the productivity of the developers higher, then you also need to look after the testers and the product owners and DevOps, because they get pressure from the developers: if just one part gets more productive, the rest of the team needs to keep up. And I think what we offer with our open-source agents is what teams need to complement the code generation and code editing in a team, because what we offer is agents that do the grind work around the code: code reviews, changelog generation, issue and error log analysis. Things around the actual building of software.
>> And where are you seeing other companies lacking in that regard?
It seems like you're obviously a solution where others are missing something. So why are they, whether it's GitHub Copilot or similar, not doing this type of stuff already?
>> Yeah, because it's so new. Our sales operation runs on LinkedIn, so we try to speak to a lot of CTOs, and what I see is that if you have a larger team, like 10, 20, 30, 40 developers, things move slowly. Many companies introduce GitHub Copilot because you can get it from Microsoft, and in terms of security and data privacy, things are easier if you work with Microsoft than with Anthropic or OpenAI directly. And still, the adoption is very low. You might think developers would embrace AI for development and run with it, but it's not the case. Most, not some, most developers struggle...
>> Really?
>> ...struggle to get really productive with it. They are so used to their ways.
>> That's actually kind of shocking. So, just to restate what you're saying: people who don't know how to code might actually be more productive with these tools than people who do know how to code? Really? No. Shoot. That's very counterintuitive to me. Talk more about that.
>> Yes. Because if you think about the regular developer day: you go into the office, chitchat a little, get your coffee, then work a bit, then you have your daily. In software development, if you have an agile project, which most teams do nowadays, you have your daily, then you work on your code, then you answer some questions. So you go to work and you have your routine, your habits, right? You do things the way you've done them for the last couple of years. And now your CTO comes in, or maybe one of the developers says, hey, we really should start using GitHub Copilot or something, and the CTO goes, okay, we need to check with legal and so on. Okay, everything's clarified, GitHub Copilot licenses are bought, so developers can start using it. But then only a few do.
And I don't really know why only a few companies buy proper training for their people. They assume, yeah, they're developers, they'll figure it out.
>> You would also assume, no? No, you know, this is actually a fair point. This is a very common business mispractice that happens, right? People assume a certain person will be able to do something because they're technical enough, or whatever it is. But no: you've got to implement how this is going to be used. You can't just assume they're going to get it.
>> And so what happens is that developers are usually under quite some pressure. The product owners, whoever runs the department or the company, are always asking when it will be done, and then you have the bugs coming in, and you need to fix them without breaking your schedule. Especially in agile, where you report every day and have your sprints every two weeks, you need to deliver. So they're under a lot of pressure, and they don't have time to really get into it. And after a full day of coding and messing with code, many developers don't really want to touch it again when they come home. Not all, there are all kinds of developers, but then maybe they watch a video or try it a little bit. For me, it took about two to three months of full-time work with AI until I really understood how it works and how I can get something valuable out of it. I thought it would be easy, but when you start doing it, it doesn't produce what you'd like, and you can read about that all over the internet: a lot of people are saying it's overhyped, it's just not working. They typed in something, it didn't come out as expected, and so they blame the AI. But it's a tool, and every tool has its way of being used: if you buy a saw and want to bring down a tree, there's a way to use it; otherwise it won't work well. AI is also a tool. It may not have all the buttons and knobs, but you still need to know how to use it, and I think many underestimate that.
So things are going very slowly; people are still struggling with introducing a copilot, and they're not really thinking about the next step, automating the things around it. And I think this is coming now.
>> Yeah, and I mean, that's a very fair point, and it's a common thing across all verticals and industries, right? Maybe people just have this blanket assumption because it's AI, the smartest tech around, whether it's content generation or research or whatever it is. If you write questions and prompts incorrectly into any of these models, the models getting smarter will help a little bit, but there's still a level of prompt engineering that's really needed that people don't get. What are some of the main drawbacks you're seeing? Because I've read up on it a decent amount, and it seems like if you don't provide a very detailed spec sheet, things can go wrong pretty quickly with a lot of this coding. Or, I would say: what are some examples of people using it and it going wrong? For example, I've heard that if you don't provide very detailed spec sheets, it can go in the wrong direction, and then people ask, why didn't it code right? Well, you didn't really tell it what to do; you sort of told it what to do, and it wasn't that detailed.
>> Yeah. So the model doesn't know anything about you. It doesn't know anything about what you're dealing with.
So I think the hard thing is to treat a model like a complete stranger. People type in something like, okay, please write me a piece of code, an algorithm for sorting or whatever, and then what it spits out doesn't match their expectation. But they don't understand that the AI model is trained on so much code, it knows so many variations, and it's trained to always reply. It won't tell you, I don't know how to do that, or, I don't know what you mean by that. It's not trained to ask questions; it's trained to generate responses. So people enter something and then they're a little bit disappointed. But it's exactly as you said: you need to provide context. In the best case, you tell it something about yourself, about the company: hey, I'm a developer at Codelligence, we're building AI agents for non-coding tasks, and I'm currently working on this agent. You explain it to the model as you would explain it to a person, and then you put your request to it.
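To make the difference concrete, here is a before-and-after of the kind of prompt Juri describes; the project details are invented for illustration:

```text
# Bare request (the model has to guess everything):
Write me a sorting algorithm.

# The same request with context, explained as you would to a stranger:
I'm a developer at Codelligence; we build AI agents for non-coding dev
tasks. In our Python code-review agent I have a list of findings, each
a dict with "severity" ("low" | "medium" | "high") and "file". Write a
function that sorts findings by severity (high first), then by file
name, and ask me first if anything is ambiguous.
```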
>> Yeah, that's key, I think, across all industries, right? There's no replacement for having a conversation with people, and, people aside, in essence these AI agents are conversational, which means you have to have a conversation with them for them to have the context. If you walked up to somebody and said, I want a well-functioning table component for whatever product you're building, and they whipped out a physical table that you put food on, you'd say, oh, you're right, I didn't tell you that. I meant, for this Notion-clone tool, I'd like a table that functions the way Notion's does, with filters and sorting, adding new rows, deleting, etc. People just make assumptions like that.
>> Yeah. And I think that's the experience part. If you work with AI models for a long time, you start to understand what they understand, and then you start to understand what context you need to provide, right? And I think one thing worth mentioning is the rise of context engineering. At the beginning of this year you could download a ton of prompt-engineering guides, but now, with AI agents, you have context engineering. You not only need to understand the model; AI agents usually have workflows, they automate something, and they take multiple steps to achieve it. And for each step you need a different context, because at every step the AI starts at zero. Every step of the workflow, it starts at zero, so you need to explain everything again. Of course, you then provide the history, and there are a lot of techniques for providing proper context, but this is a new skill set: context engineering.
>> Yeah, it's very important.
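Here is a minimal sketch of what that looks like in code, assuming a simple two-step review workflow; the step names, background text, and helper are illustrative, not Codelligence's actual implementation:

```python
# Context engineering in miniature: every workflow step starts from zero,
# so the caller rebuilds exactly the context that step needs.

BACKGROUND = (
    "You are a code-review agent for a Python web service. "
    "Be concise and only flag real defects."
)

def build_context(step: str, history: list[str], diff: str) -> str:
    """Reassemble the full prompt for one step of the workflow."""
    parts = [BACKGROUND]  # restate who the agent is, every single step
    if history:
        parts.append("Findings so far:\n" + "\n".join(history))
    if step == "review":
        parts.append("Review this diff for defects:\n" + diff)
    elif step == "summarize":
        parts.append("Summarize the findings in one paragraph for Slack.")
    return "\n\n".join(parts)

history: list[str] = []
diff = "- if user == None:\n+ if user is None:"
for step in ("review", "summarize"):
    prompt = build_context(step, history, diff)
    # reply = call_model(prompt)  # whichever model the team uses
    history.append(f"[{step}] (model reply would be recorded here)")
```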
>> I mean, there are a bunch of tools out there right now, and they're great. I've used Relevance, have you heard of that one? Relevance AI.
>> Yeah. Yeah.
>> That one's awesome. And honestly, I don't know if you've seen it, but what really improves things is that the context windows are getting bigger, along with the ability of these agents to search documents. So I'll add knowledge bases to it, and since it has a longer runtime capability, I'll tell it, okay, for this portion of the task, look at this knowledge base and get your answer from there, and so on. The more context you have, the better it becomes. And obviously, to talk about you guys for a second, because we're getting close to the end of the episode and we've mostly talked about the industry, which is really great so everyone knows what you're talking about: if you had to give a shout-out to the coolest stuff you're doing at Codelligence, I know you mentioned the things you can do for teams and solving problems, tell us a little more about the number one thing you'd like everyone to take away that your company can do, and that you'd like them to check out.
>> Maybe I can share my screen for a second.
>> I would. I think that'd be awesome.
>> Yeah. So let me know if you can see it.
>> Yes, sir.
>> So it's very simple, actually. The agents are open source. You can go to the website, dev-agents.ai, and we currently have a couple of use cases implemented. Of course, every team is a little bit different, but there are a couple of tasks that exist in every team. And if one of those use cases is something you spend a lot of time on, you just go here, start the setup wizard, configure your avatar, and say what type of infrastructure you have: do you use GitHub, GitLab, Azure DevOps? What model are you using: OpenAI, Azure, Bedrock, whatever it is. And then it provides you the configuration and the Docker command to run the agent. So it's really easy to start with.
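For a sense of what "the Docker command to run it" means in practice, here is a hypothetical sketch of the kind of command such a wizard might emit. The image name and every environment variable below are invented placeholders, not the real ones; the wizard at dev-agents.ai generates the actual command for your setup.

```sh
# Illustrative only: image and variable names are NOT the real product's.
docker run -d \
  -e GIT_PROVIDER=gitlab \
  -e GIT_TOKEN=... \
  -e LLM_PROVIDER=openai \
  -e OPENAI_API_KEY=... \
  -e SLACK_BOT_TOKEN=... \
  example/dev-agent:latest
```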
And then, for example, if you use Slack and use the agent in Slack, you can just say, provide an impact analysis on PR 3, and because it's connected to our GitLab and also has access to the codebase, it's able to provide those functions and give the analysis. Then I can ask it to generate changelogs, or ask it questions about the codebase. So this can be integrated into workflows easily, but if you integrate it into Slack or Teams, we saw that the adoption is really good. For example, here, for a change, you get testing recommendations: what was changed in the code, and what needs to be tested to make sure nothing is broken.
>> You know, I really like that it's in Slack, by the way.
>> Yeah, we saw that people are in Slack or Teams anyway.
>> Yeah, that's very smart.
>> And if you use it from there, it's transparent; you don't have a lot of issues with, you know, abuse.
>> No, I like that a lot. I think Slack apps and things like that are very good. And I was going to ask this question, but I think this pretty much answers what my question was going to be. I always like to ask companies like yours: how are you going to make sure this is easy to implement into somebody's workflow? A lot of people end up making their own product, and kudos to them, but then they end up putting it somewhere where it's its own separate thing. By putting it in Slack, very naturally, I feel like it makes the ecosystem a lot easier to digest, you know? So that's a great call on your part.
>> Yes, I think so too. And also giving it an approachable identity: you can call it whatever you like, in this case it's Betty, and that helps with the adoption.
>> Well, yeah, this is incredible. So everyone, the last thing I'll say before we close it out: thank you again, Juri, for being on the episode. And please, everyone, go to codelligence.com. Thank you so much for being on the show today, Juri.
>> Thanks for having me, Demetri. It was a pleasure.
>> It was a pleasure for me as well. Thank you so much, everyone, for listening to this episode and watching it. Please make sure to leave a like, comment down below, and don't forget to subscribe on all the different podcast platforms. We'll see you in the next one. Peace. [Music]