Episode 118 Jan 19, 2026 38:18 4.4K views

Why Deterministic AI Beats GenAI in Cybersecurity with Ian Amit

About This Episode

Subscribe to AI Agents Podcast Channel: https://link.jotform.com/subscribe-to-podcast

In this episode of the AI Agents Podcast, we speak with Ian Amit, co-founder and CEO of Gomboc.ai, about how AI agents are revolutionizing cybersecurity and infrastructure management.

Drawing on his extensive background as both an attacker and defender in the cybersecurity space, Ian discusses how deterministic AI can identify and fix vulnerabilities in cloud environments, optimize performance, and reduce engineering toil at the source—right within the development environment.

Ian explains how Gomboc.ai shifts the focus from merely identifying problems to actually fixing them in real-time, embedded within the coding process.

We also explore why generative AI often falls short in secure engineering applications and how Gomboc’s deterministic approach brings speed, accuracy, and trust to software development.

Whether you're a DevOps professional, security engineer, or tech leader, this episode offers key insights into the future of AI-powered development and cybersecurity.
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⏰ TIMESTAMPS:
0:00 - Expanding Fixes Beyond Security
1:03 - Introducing Ian Amit and Gomboc.ai
2:00 - How Cybersecurity Led to AI Innovation
5:51 - Problem Solving Through AI in Security
10:02 - Early Motivation Behind Gomboc.ai
16:01 - Shifting Left in Code Fixing
21:00 - Fixing Code Problems at the Source
28:49 - Problems With AI-Generated Code
33:08 - Real-World Use Cases in Cloud Security
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Sign up for free ➡️ https://www.jotform.com/
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Follow us on:
Twitter ➡️ https://x.com/aiagentspodcast
Instagram ➡️ https://www.instagram.com/aiagentspodcast
TikTok ➡️ https://www.tiktok.com/@aiagentspodcast
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

Transcript

So that was the first understanding. We expanded beyond security, and we can fix things that are not just security related. We can fix things that are related to SRE, site reliability engineering, and things that are related to cost and performance optimization. So that's one. And the second one is we're no longer just looking at existing code and saying, there you go, I fixed it for you. We decided to shift all the way left. And that's a security paradigm, shifting left, meaning getting as close to the development process as possible. And today we're at the development environment itself. And we're there sort of as a good angel on the shoulder of that engineer who's writing the code. Hi, my name is Dmitri Bonichi and I'm a content creator, agency owner, and AI enthusiast. You're listening to the AI

agents podcast, brought to you by Jotform and featuring our very own CEO and founder, Aytekin Tank. This is the show where artificial intelligence meets innovation, productivity, and the tools shaping the future of work. Enjoy the show. Hello and welcome back to another episode of the AI Agents podcast. In this episode, we have the co-founder and CEO of Gomboc.ai, Ian Amit. How are you doing today, Ian? >> Pretty good, Dmitri. How about yourself? >> I'm living the dream. I have to be. It's the only way to live. I appreciate you making the time today. I think it's a really great time to be in the world of AI, talking with cool people like yourself. And just to kick things off, you have a pretty big background in the security space from

my understanding. And now you're in this realm of AI. Could you tell us how you made it into this world of AI and AI agents? >> Yeah, absolutely. And I love the way that you're calling me old. "A pretty big background." [laughter] >> That's fair. It is a logical entailment. I apologize. >> No, I'm just kidding. So, to make a long story short, I would like to start by saying that AI has been around forever. It's not really new. It's just the fact that in the past three to five years, we've been focusing on those new companies that operate within this new hype cycle of generative AI that brought it to attention. But as you noted, I've been in

the security industry for almost 30 years now, both as a practitioner, where I grew through the ranks of hacking and pentesting and later on consulting and running consulting teams, as well as working in corporate, where I defended organizations and eventually ended up in leadership positions such as chief security officer for a few public companies. But my exposure to AI has been out there for quite a bit. And I think that specifically for cybersecurity, it's a field that generously benefits from the kinds of automation and advantages that AI can bring to it, since we're dealing with so much information, disparate data sources, unstructured data, and it all falls on the shoulders of the practitioners who are expected to go through all of that and make very quick decisions, sometimes

very critical decisions. So my personal exposure to it has been out there for quite a bit, and in this last iteration, creating Gomboc.ai is basically my attempt to apply those AI algorithms and models to a specific area in that space. >> Yeah, interesting. So I have not had a lot of experience in the security space. I don't know too much about it, honestly. I think it's one of those areas that I'd love to learn a little bit more about. What are some of the things that you learned in that area that best prepared you for the field you're in now, with what you're doing at Gomboc? >> That's a great question. My

personal experience has been on both sides of the cybersecurity field: both as an attacker, who is tasked with challenging and finding flaws, understanding a target's IT layout and capabilities, and finding ways in, as well as a defender, who's expected to counter those attack elements, to build defenses and be able to deploy controls and tools that would alert the defender to malicious activity and allow that defender to take action and circumvent those kinds of attacks. So having been on both sides of the practice really prepares you to understand how to utilize AI from an attacker perspective. And I kind of alluded to it at the beginning: the fact that you have to be knowledgeable across

multiple different domains, both technological as well as business or process, as well as the requirement to be able to ingest and process unstructured data from multiple sources. It could be doing some open-source reconnaissance and intelligence on someone, trying to understand where they went to school, what they studied, where they're from, what languages they speak, and finding a way to use that to your advantage when you're communicating with them, as well as the technical side: what's the technology that's being used, what languages are utilized in their environment, all the way down to what sort of operating systems and applications are in use. And you need a familiarity with those, to the extent that you know that certain versions

or certain capabilities or platforms have vulnerabilities, and then be able to string all those together and execute an attack that takes advantage of all of those different vulnerabilities and pieces of information, and gets to the crown jewels, so to speak, of the organization. So again, we're talking about multiple domains, different data, different technologies. Historically, what security has been using is automated tools and means for collecting and classifying and enabling a practitioner to manage that information. And the bulk of the thinking and processing, of finding the right narrative through all of those data domains, is left for the practitioner. And that is something where AI provides a huge potential: for someone to get rid of a lot of those human tasks,

automate those, and then only leave the practitioner with the work products of understanding, correlating, and collating that data, leaving it for the practitioner to use. And again, that works on both sides. I've described sort of an attacker perspective of looking at something, trying to figure out how do I break in, how do I abuse a certain system. The same level of thinking is applied on the defensive side. So again, a lot of opportunities to disseminate, process, and analyze bulks of data from a defensive perspective. And this grows exponentially for a defender. One of the biggest tasks for a cybersecurity practitioner in the corporate world is to be able to process terabytes of log data from firewalls and servers and applications and users and

login systems. Just think about the number of events and the number of software events that happen on a daily basis in a typical corporate environment. All of that is being fed to a centralized processing system, where the cybersecurity professionals, the defenders in that organization, are basically tasked with sifting through all that data and identifying the anomalies, identifying the patterns. And that's where AI is very useful in the defensive cybersecurity scope: finding those patterns that a human would have huge trouble addressing. Again, we're talking terabytes and petabytes of data, but you task it to an AI model that's been trained to look for anomalies, to look for patterns, to look for things that individually might not

be problematic, but when they occur over a longer time span, going back to being able to process reams of data, they might pose a problem. So that's really that history, or at least my personal history, practicing cybersecurity as an attacker, as a defender, as a leader, as an executive tasked with defending all those organizations, that really honed my personal skills and understanding of how and where to apply AI models most effectively. >> Yeah, that's interesting. I've noticed a trend with a couple of different things. There's a lot of people who are working in output-based AI, is how I'd like to word it. And the synthesization, or the analysis, of the other

half of it seems to be a trend of new products, right? The first part of it was what's being output, in this case, I guess, people attempting to do nefarious things, and then there's the protection on your side. So understanding the outputs helps you understand the synthesization, I guess, and the inputs, or the reflection to that. The first instance I'd seen of this was an HR company that came on the show that we interviewed, and they were pretty forward about how there's a lot of problems with resumes being mass-sent to companies now, and there's no AI to synthesize the other half of it. So the ingestion and analysis is what matters. Interesting. So another follow-up question I had, more on the development side. I

saw that, I think it was back in 2023, Gomboc.ai was a top-four finalist at the Black Hat Startup Spotlight shortly after launching. Can you take us back to the early days of development? Was there a specific frustration that you faced, related to, you know, unresolved tickets? What was it like in the early days, and what was really top of mind for you then? >> Yeah, it really goes back to the reason why I started Gomboc to begin with. That was on the heels of my last role as a chief security officer, where I realized that I had managed to build a pretty solid cybersecurity practice in the corporate environment that I used to work with, and I had

visibility. I could know what's wrong in my cloud environments. I could identify anomalies and activities that are problematic across my IT infrastructure, and the physical presence across the different offices we had around the world. But when it came to addressing issues, especially in the cloud environments, especially in those environments that are much more ephemeral, that are continuously changing, when it came down to fixing those things, that's where I saw a problem. And going back to your point about issues with development: finding something that's wrong and calling a baby ugly is easy. That's what we've been doing for decades in cybersecurity. Addressing and being able to fix those issues systematically, from the root cause, is the biggest problem. And that's why I decided to tackle that space, not necessarily by

building another product that's going to tell you, hey, this is wrong, go fix it, because we all know that, but really coming in as sort of your DevOps agent that takes you from detection all the way through remediation, all the way through giving you the work product. Going back to your point about the outputs of AI, my vision was to create that agent that takes the burden off of the engineers, who are bombarded with multiple alerts from security products, who are inundated by changes in regulatory compliance and shifting environments. And suddenly AWS comes up, especially after re:Invent in the past couple of weeks, with a bevy of new services and updated services. All of that basically funnels down to a DevOps engineer who has to deal with that environment and come up with a correct fix

or an update for an environment, or build environments that are compliant with all of those changes. So it was really that kind of motive that drove me to start Gomboc, realizing that this is where the friction is. We don't have a find problem; we have a fix problem in the corporate world. And in that particular space, I realized, even the most advanced generative AI models could not fix. They just did not have the level of accuracy and contextual ability to say: here's the work product, here is the fix for you, Mr. DevOps Engineer. I did all the heavy lifting for you. Go ahead and apply this, go ahead and review this. I've saved you all that toil and work and research and checking references and looking up documentation and understanding what it is that the

security alert wants, and why it needs to be encrypted and why it needs to be protected. All of that is done for you, fully explained, fully reasoned. There you go, implement the fix. >> Yeah, it's really interesting to learn about these new areas. I feel like it teaches me more about the landscape every single time. And I always love hearing the stories about how people initially got into the venture they're doing. How many people currently are at Gomboc? >> So we're currently 16 employees at Gomboc. I like to keep things lean and effective and efficient, mostly based out of our office here in New York City. >> So it's a

constantly evolving thing. I asked you at the beginning what it was like; now, getting to the last six months, what's the tagline you use for "this is what we do," and how has that changed since the beginning? >> Right, that's a good one. So the tagline really remains: it's all about the fix. >> As I said before, it's not about finding problems, it's about fixing them. But what changed over the past six months is a better understanding that we should go as far back to the source of the problem as possible. When we started, we were all about coming in and saying, "All right, here's the fix for you. I see your code. I see your environment. I understand what the requirements are from a security perspective, and I'm

going to provide that fix for you, ad hoc, for your existing environment to implement." So we've taken that, and the journey that we went through over the past two and a half, three years was, first of all, to understand that this is not just a security problem. This is an engineering problem. Security is one of those elements, like I mentioned before, that finds itself at the receiving end of the DevOps engineer's task list of things to do to their environment. So that was the first understanding. We expanded beyond security, and we can fix things that are not just security related. We can fix things that are related to SRE, site reliability engineering, and things that are related to cost and performance optimization. So that's one. And the

second one is we're no longer just fixing things ad hoc. We're no longer just looking at existing code and saying, there you go, I fixed it for you. We decided to shift all the way left, and that's kind of a security paradigm, shifting left, meaning getting as close to the development process as possible. And today we're at the IDE, we're at the development environment itself, and we're there sort of as the good angel on the shoulder of that engineer who's writing the code. So the second that someone starts building a cloud environment, starts putting out an architecture, before they even deploy the first version of it, we're there to accompany them and apply fixes, and bring those different policies that I've mentioned to life, almost in real time, at the

editor itself, at the IDE. So when I'm spinning up, when I'm just writing, or even prompting an AI like Cursor or Copilot to start building or add another element to my environment, Gomboc is there to apply the fixes to that code before you even save it for the first time. So that's the kind of transformation of that tagline of "it's all about the fix": we've moved it as early as possible to prevent any kind of rework, because the second that an engineer puts something out, the second that you code something and you deploy it, the last thing you want to do is go back and fix it or change it. If it's working, it's working. That's it. You want to

forget it and move on to the next thing. So being able to address things and provide those fixes as early as when the code is being generated, that's what we've been focusing on in the past six months. >> Interesting. Yeah. What do you think you're probably going to be focusing on in the near future, then? >> It's really a combination of getting into the hands of more and more engineers. We've recently released a fully free community edition where an engineer, even if, let's say, the company hasn't bought Gomboc yet as a platform, can still use the community edition to help them fix issues in their code for free, >> just as part of the community. What they don't get is, you know,

the enterprise features: the ability to customize the policies, the ability to generate reports, or more advanced integrations into a full enterprise environment. That's one of those areas we're really focused on, because we do believe that clean coding and being able to produce clean environments is the key to the boost of productivity that all those GenAI tools have been promising and under-delivering. So that's one area. And the second one is truly more the enterprise side, on our customer side: being able to do what they need, understanding that they're faced with a set of different policies, different requirements that they constantly need to adapt to and constantly need to figure out how to implement in their environment. So, Gomboc is there to

ingest all those requirements, all those changes, all those policies and frameworks and compliance reports, and come up with, again, not an alert that says this is wrong, this is out of compliance, but with: here's a fix for you. I saved you the work, and it's fully reasoned, it's fully contextual. All you have to do is now implement it. These are the two different tracks we're focused on in the next couple of quarters. And it's really about minimizing the friction. It's about being able to say, here's an arbitrary set of, say, Acme's internal DevOps policies on how to deploy to the cloud. It might be affected by multiple compliance requirements: financial, healthcare, whatever it is. But at the end of the day, here is the policy that I want to implement. We can take that

and simply come up with the work product. >> Yeah. What do you think is the hardest thing when it comes to both the implementation of what you're doing and the explanation to your customers? >> Right. It's sort of combined together. I alluded to it before by saying there's a huge hype cycle right now. Anything that's got AI in it is like, "Oh my god, yes, this is going to solve the world." But the bottom line is explaining to our customers and prospects that we're not just another iteration of, or a fancy interface to, one of the common generative AI models. And that's the biggest problem, because at some point the industry started hurting itself by overpromising, by saying this is going to 10x the amount of code that you can

generate. Which, again, is not a lie, but no one talks about the other side of 10x code, which is you get 10x the bugs, and the same time to resolve those bugs. And on top of those bugs, you get hallucinations. You get inaccuracies. You need to continuously, I call it, wrestle the pig, which is [laughter] creating creative prompts to get around some of the limitations and make it more accurate. I've seen people sometimes feel like they have to convince their AI model to do something, to write code and not just put a comment that says, oh, this is trivial, fill in the rest of the code. It's like, "No, I'm literally paying you to do this." So it's really making them understand that we're

not another generative AI model, and explaining what determinism is. We're a deterministic AI model: for the same set of inputs and environment, we will produce the same output, unlike a generative model, which is designed to produce new results all of the time. For an engineer, that creativity is flawed. The fact that, for the same set of inputs, for my environment and these services where I want you to fix something, you come up with different answers and different fixes while I haven't changed anything, that's not a trust-building exercise. So it's really about building the trust with the users, with the engineers, where they can actually see it. And that's why we came up with the community edition: to give them an opportunity to experience what it's like to work with a deterministic model. And oftentimes what

they do is use the generative AI models. They tell it, hey, build me this and that environment, and it produces fantastic code. It looks beautiful, but more often than not it just doesn't work, and it definitely doesn't meet the requirements of their particular organization. We're there, then, with that community edition, to provide those fixes even to the generated code, to make it work, to ground it in reality, and to make it compliant with whatever policies you need, without having to come up with elaborate prompts. So it's really that kind of combined education and trust-building exercise that's the focus of our go-to-market approach right now. >> Yeah. I had never considered the lack of, or the, creativity being problematic in a sense. I mean, I have played

around with some of the AI coding stuff. I use Lovable a fair amount, and there have been some interesting moments where that does occur. I had a bug on this thing I'm building, an internal tool for the team. Which, by the way, for anyone in the space: if you're in automation and stuff already, you can hack together an internal process tool that's way cheaper than using make.com and paying for it all the time. Anyways, back to the main point. >> I'm learning that very quickly with the >> stuff I'm doing. But there was an issue I was running into where I had a theme that was fine, and I added some prompts to make it look nicer, etc. And at some point it made a

separate version of a dropdown handler, and I was in a spin cycle of an hour trying to fix it. Then I noticed one of them was working correctly, and I was like, don't tell me. And I asked, are these two different components? And it was like, yes. And I was like, why did you make a new one? There should be a universal dropdown. Just... why? So yeah, that took me like an hour to fix, because I didn't realize it, and I was tired and it didn't come to mind. And I'm sure that's some of the "creativity" you're trying to >> Exactly. And I love that example. It goes beyond the "why did you just do that": it's the fact that you've

spent probably more than an hour debugging, trying to figure out why this is happening, ending up with the conclusion that something was created here that doesn't match what was created there, and now having to fix that, now having to either migrate this component to the other, or think about the back end of what happens there. This is exactly what a lot of engineers are frustrated with, and it's pretty much the same experience. You can put up something really quickly and be like, "Wow, you just saved me like five hours of coding and designing," and then you end up spending six hours debugging, trying to understand what's the root cause and where does it fail, and net-net you're back at square one. You could have done this

yourself, or at least without the debugging. And I think the biggest issue here is the things that you haven't realized are broken yet. Is that label a different class, a different kind of instance of a label or a text or a component? And when is that going to bite me further down the line, when I need to make some changes or extend that? So again, I love the example, and this is a very simplified example. Think about the people who are using that not in a Lovable or a Base44, but in actual code, which is ten times more complex and requires ten times more knowledge and understanding of what that code does. So thanks for the example. That's fantastic.

>> For sure. Yeah, it's something I hacked together that's not crazy complicated. I think it's really effective for what I'm doing. But as I made that implementation of style improvements, I realized that there were some universality issues. So then I had to stop adding anything new and fix that, because it was just going to keep running with the same mistake. And I'm sure that could be something that, if there's not proper QA on large things, is, like you're saying, 10x more of an issue, because it took a long time. It doesn't have a magic remove-or-fix button either, because I had to tell it like 15 prompts in a row: fix the remaining ones, fix the remaining ones, fix

the remaining ones, and then it finally did. But it took forever. So yeah, what are some other examples like that that you're seeing right now, things you'd want to flag for the audience to be concerned about and look out for, and how you guys have helped people prevent these issues? >> Yeah, that's good. When you started the question, I was thinking that's a very broad stroke, but when it concerns our framing, and I will clearly say we're only dealing with a small portion of what's required for an application to be deployed: we're dealing with infrastructure.

We're dealing with the foundations of how that is going to be deployed into the cloud or on-prem. And actually, we're still dealing with the same old issues. I'll give you two different examples. One is things that are not encrypted. You want your data to be encrypted, especially you as a consumer, and myself as well as a consumer. You want to have a level of trust in your providers, in the vendors that you're using, that whatever data you're giving them is going to be encrypted, is going to be protected properly. It is still very common, going back to your example, to continuously find: oh, we forgot about this; oh, we forgot about that; oh, there's something in transition, where we take the data, process it, and then send it

out, but that is not encrypted, and you end up with another database, another location in your architecture, that does not have adequate protections for the data you have been entrusted with. So that's a classic example. When we work with our customers, we often hear: well, yeah, this is the architecture, this is where the data lies, and that's protected, that's encrypted or secured. And then they automatically go: and there's a lot of other areas that we're still trying to find and weed out, and every time something pops up. And the process, again, it's less about the finding; the process of reconfiguring that environment to have the adequate protections is what takes a lot of time, pretty much like your

example of "oh, do this as well, and do that as well." So I'm just riffing on the same example and giving you the corollary of what that looks like in an enterprise environment. The second example is, again, trivial: keeping things open. The S3 bucket has been, since pretty much the day S3 buckets were invented, the bane of security issues, and the equivalent of those: basically forgetting to properly restrict access to information. And again, these might be storage units that are designed to hold information in the clear, but should not be accessible to the public, especially if they can also be written to, not just read from. So finding all those things is one part of the problem: fairly trivial, and it can still

be automated to a large extent. Applying the right controls, making sure they're not open to the public, and doing that as part of the overall architecture is still one of the biggest problems. And what our customers tell us is, since we can do things in a contextual manner, we're not just slapping a stop sign in front of an S3 bucket. We're not just shutting down an instance. We're understanding the architecture within which that instance lives, and providing a bespoke, curated, deterministic fix for that architecture, to prevent access to that resource while allowing the architecture to still live. We're being told that we're saving dozens of hours of engineering time that otherwise would have been spent trying to untangle: how does that environment look? What are the requirements? What's the documentation?
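Ian's determinism point can be made concrete with a toy sketch. The configuration schema and rules below are invented for illustration only; this is not Gomboc.ai's engine or any real IaC format. The idea is a pure function over a bucket configuration: the same input always produces the same fix, which is the property he contrasts with generative models.

```python
# Toy illustration of a deterministic infrastructure "fixer".
# The config keys (public_access, encryption) are hypothetical,
# invented for this example; they are not a real cloud API.

def fix_bucket(config: dict) -> dict:
    """Return a corrected copy of a bucket config.

    Pure and deterministic: the same input always yields the
    same output, unlike a generative model that may propose a
    different fix on every run.
    """
    fixed = dict(config)  # never mutate the caller's input
    # Rule 1: storage must never be publicly accessible.
    if fixed.get("public_access") != "blocked":
        fixed["public_access"] = "blocked"
    # Rule 2: data at rest must be encrypted.
    if not fixed.get("encryption"):
        fixed["encryption"] = "aes256"
    return fixed

bucket = {"name": "acme-logs", "public_access": "open", "encryption": None}

# Determinism check: repeated runs produce identical fixes.
first = fix_bucket(bucket)
second = fix_bucket(bucket)
print(first == second)         # True
print(first["public_access"])  # blocked
```

Because the fixer is a pure function, its output is repeatable and reviewable, which is exactly the trust property the deterministic approach is selling; a real system would of course derive its rules from the surrounding architecture and policy context rather than two hard-coded checks.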

So those are the two most common use cases. I'm not even talking about other things that are more about hygiene, cost optimization, or maintaining an environment: upgrading and updating software components in an automated fashion that still takes the context into account, rather than trying to mass-upgrade a software version for a component, which typically ends in disaster.

>> What is your ultimate goal for what you're doing in the next three to five years? And what would you like to leave people with so they can better understand why that's a vision worth working towards?

>> I'll make it very quick. The ultimate goal is to save people time, and especially, for organizations, to save engineers' time that is currently spent in a non-effective, repetitive manner that requires a lot of reading and contextualizing. My goal is to remove a lot of toil from the process of platform engineering, and that only comes with a proper understanding of what tools you have available right now. One of the biggest problems I think our industry is facing is that GenAI models are portrayed as the best hammer ever produced, and everyone is wielding that hammer and turning everything into a nail, or, more often, a thumb. [laughter] It goes back to your first question, when did I start understanding AI, and I told you AI has been around for decades. Understand that GenAI is not the only AI available in your toolbox; there is a plethora of different models designed to solve different problems. GenAI might be great at generating images, creating text, and helping you write, but it's not the right tool for the precise engineering tasks that platform engineers and software developers often need. So I would leave them with the blessing of understanding that they have more than just a hammer in their toolbox, that they have a set of completely different tools, and the education to know how to use each tool for each task. That's what I would leave the audience with.

>> Yeah, that makes a lot of sense. Absolutely. Well, with that being said, I really appreciate the time you spent here today. Where can everyone go to

find what you're all doing over there at your company?

>> Well, obviously gomboc.ai is our website, and I've mentioned the Community edition. So if you just go to gomboc.ai for the Community edition, you don't have to talk to me or any of my salespeople; you can just go ahead and download it and experience it for yourself, and really see the difference between a generative AI model, again, Cursor, Claude, OpenAI, whatever it is, and a deterministic one that is fast and accurate. So feel free to download and experiment; the only thing I'm asking is, send some feedback.

>> For sure. Well, with that being said, please make sure to go to gomboc.ai. That's g-o-m-b-o-c dot ai. Thank you so much for listening to this episode, everyone. Make sure to leave a like, comment, and we will see you in the next one. Peace.