Leading Data Reliability in the Age of AI - A Conversation with Lior Gavish, CTO & Co-founder
About This Episode
In this episode, Lior Gavish, CTO and Co-founder of Monte Carlo Data, joins the show to uncover the growing need for data observability and reliability in the era of artificial intelligence.
Lior explains how Monte Carlo applies key DevOps principles to data pipelines and AI systems, helping teams identify and troubleshoot issues before they impact end users.
We dive into how AI is reshaping entire workflows, from software development to customer support, and why organizations are shifting from experimentation to production-scale AI deployments.
Lior shares insights on building in-house AI agents, making real-time troubleshooting up to 80% more efficient, and how his team is future-proofing data reliability as businesses lean into automated workflows at scale.
Whether you're a data professional, engineer, or tech leader, this episode explores practical strategies to build more reliable AI-driven systems.
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⏰ TIMESTAMPS:
0:00 - The AI Adoption Challenge
1:15 - Meet Lior Gavish From Monte Carlo
3:20 - Building Reliable Data Systems
10:00 - Launching Observability Agents
18:20 - Scaling With AI Use Cases
26:00 - AI's Impact On Business Value
31:10 - Effective Agent Integration In Companies
38:40 - The Software Job Market And AI
45:00 - Vibe Coding And Low Code Tools
47:30 - Ensuring AI Reliability In Production
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Sign up for free ➡️ https://link.jotform.com/Ts8w3FBJTD
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Follow us on:
Twitter ➡️ https://x.com/aiagentspodcast
Instagram ➡️ https://www.instagram.com/aiagentspodcast
TikTok ➡️ https://www.tiktok.com/@aiagentspodcast
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Transcript
Because the truth is that most people, if you look at the adoption curve, are in the mainstream or are laggards. Yes, there will be a few enthusiasts who go use ChatGPT for everything possible and dance around the limitations, but for most people, if you don't put it where they already do things, and if you don't make it work reliably enough, they're just never going to adopt it. I think we're starting to get there in a few domains, most probably software development, but I'm now starting to see it happen across the board in a lot of different use cases. Businesses are investing in this and it's working. It is actually delivering value, and I think we'll start to see that in the top line and bottom line in the not-too-distant future.
>> Hi, my name is Dmitri Bonichi and I'm a content creator, agency owner, and AI enthusiast. You're listening to the AI Agents Podcast, brought to you by Jotform and featuring our very own CEO and founder, Aytekin Tank. This is the show where artificial intelligence meets innovation, productivity, and the tools shaping the future of work. Enjoy the show. Hello everyone, my name is Dmitri, and welcome back to another episode of the AI Agents Podcast. We're here talking with Lior Gavish from Monte Carlo. He's their chief technology officer and co-founder. How are you doing, Lior?
>> Hey, how's it going, Dmitri? Fun to be here and excited to talk agents.
>> Heck yeah.
>> So, as the CTO of the company, you're obviously someone who's experienced in creating agents and working with agents. I'm sure some people know who you are and what you do, but for those who don't, could you give us a little breakdown of yourself and what you're building at Monte Carlo?
>> Yeah. So my name is Lior. I'm originally a software engineer. I enjoy building things in software and data, and over the past 10 years with ML and then AI. About six years ago I started a company called Monte Carlo together with Barr, who's our CEO. We're also married, by the way, so it's really fun. What we do is help data and AI teams with reliability. Data and AI teams build things: they build analytics, they build ML, and recently they're building a lot with AI. And this stuff breaks; these are very complicated systems. So we're there to help them make those systems reliable. The way we do that is we've taken some of the ideas from DevOps and applied them to the world of data and AI. Specifically, we're an observability solution: we help people get alerts when their stuff breaks, and then troubleshoot and work through those issues so they don't impact their users or stakeholders.
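To make the observability idea concrete, here is a minimal, hypothetical sketch of the kind of check such a tool automates: a freshness monitor that alerts when a warehouse table stops receiving new data. The table name, SLA, connection string, and alerting hook are illustrative assumptions, not Monte Carlo's actual product or API.

```python
# Hypothetical freshness monitor, for illustration only (not Monte Carlo's API).
from datetime import datetime, timedelta, timezone

import sqlalchemy as sa

FRESHNESS_SLA = timedelta(hours=6)  # assumed SLA: new rows should land at least every 6 hours


def send_alert(message: str) -> None:
    """Stand-in for a real notification channel (Slack, PagerDuty, email, ...)."""
    print(f"[data observability alert] {message}")


def check_freshness(engine: sa.Engine, table: str, ts_column: str) -> None:
    """Alert if the newest row in `table` is older than the freshness SLA."""
    with engine.connect() as conn:
        latest = conn.execute(sa.text(f"SELECT MAX({ts_column}) FROM {table}")).scalar()
    if latest is not None and latest.tzinfo is None:
        latest = latest.replace(tzinfo=timezone.utc)  # assume timestamps are stored in UTC
    if latest is None or datetime.now(timezone.utc) - latest > FRESHNESS_SLA:
        send_alert(f"{table} looks stale: most recent row is {latest}")


if __name__ == "__main__":
    engine = sa.create_engine("postgresql://user:pass@host/db")  # placeholder; any SQLAlchemy-supported warehouse
    check_freshness(engine, table="orders", ts_column="created_at")
```

In practice an observability platform runs many such checks (volume, schema, distribution) across thousands of tables and routes alerts to the right owners; the sketch only shows the core idea of alerting before stakeholders notice.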
>> Interesting. And how did the founding of the company come about?
>> Oh, good question. Barr and I both individually experienced, in our own jobs, the sort of problems we're now solving for our customers. My previous startup was a cybersecurity startup that got acquired by a public cybersecurity company, where I ended up building next-generation fraud detection systems. Detecting fraud, especially in recent years as attacks got really sophisticated, is heavily based on analytics and ML. You can't use rules or deterministic algorithms; you really need to learn from a lot of attacks and use that to predict whether a new event or activity might be fraudulent. As part of doing that, we had to deploy analytics and ML systems in production at a pretty significant scale. We focused on monitoring email traffic, and I think we were processing about a billion emails a day across tens of millions of email users. And Barr wanted to start a company. We were brainstorming ideas, and one of the things that came up, as I was reflecting on the times we had disappointed our customers, was that it was more often than not because the data we were using was wrong in one way or another.
>> Oh, really?
>> Yeah. And every software application has downtime; websites fail sometimes. But we were pretty good at managing the infrastructure and application pieces of it, and not because we were some crazy-good engineers. It's because there's an established methodology, DevOps, for how to ship reliable software, and there's a lot of tooling you can buy out of the box to make sure things live up to the SLAs that fit your business. That didn't exist on the data side; it was the wild west. And it certainly didn't exist on the ML side. Barr had a similar experience, but coming at it from a different angle, from an analytics perspective: she shipped a lot of data products, if you will, to her customers and got angry phone calls on Monday mornings about wrong numbers. So it was interesting that we both experienced that.
What we did, and I credit Barr, she did all the legwork: she went out and spoke to, I want to say, about a hundred data and ML teams back then, to see whether she and I were just particularly bad at what we do or whether this was something other teams were struggling with too. And the response was just overwhelming. It's a problem that almost every team had, I want to say every team operating at any meaningful scale. It was also pretty evident that there's no established methodology and no good tooling you could buy to do this. And we thought: everybody thinks their data is a snowflake, but the reality is that some problems happen to every team, because they're actually tied to human errors and mistakes more than to technical or data-specific issues. So we thought, hey, maybe there's something here we can solve. Maybe there's a set of tools, specifically observability, that we can bring to the table to help teams do that. And that same spark we saw in people's eyes when we talked about it six years ago, we still see today. Not much has changed, except that we've now been able to serve over 400 teams.
>> Wow.
>> From small startups all the way to Fortune 500 mammoths that use data in a lot of very interesting ways.
and and and I I know you spend a lot of time on it uh is really the kind of advent of AI right so I think um you know if if when we started uh it was data teams um and they had a data science function and so it would be can call it 80% analytics 20% ML and now you can literally see it on LinkedIn many of our customers and and and also teams that are not our customers are now um not just data teams, they're data and AI teams. And um and that kind of shifted the challenge for us as well. You know, if we want to help these teams build reliable, high quality products, then we need to be able to support um the AI use cases as well, right? and help with observability into >> um to AI models into agents
um and and all the good stuff. So that that that's been a pretty >> uh big shift in the past couple of years, I'm going to say. Um on that uh it's been a great ride. >> Nice. So what have you been working on recently that's been pretty cool? >> Oh, um a lot of things. Uh two that I'd call out um are probably so we've had the fortune of uh shipping uh a few agents in our own product and so okay you know our customers >> um our customers are you know like like everybody are trying to become more efficient um all the time and AI kind of opened up the opportunity to accelerate certain workflows and automate certain tasks tasks that used to be very manual and tedious. Um, and so we we we we shipped well we we we've now called it
the the observability agents uh which are a set of agents that really help automate the most painstaking parts of of of of work around reliability. uh for data teams specifically we shipped um a monitoring agent that helps users understand how to monitor their systems and talk more about that. Uh and then we shipped another agent uh this is really fresh um literally went live today. Um we shipped a troubleshooting agent that helps team you know given that something happens um troubleshooting agent will help teams um uh troubleshoot and understand what caused the issue right which is a pretty complex task. Um and and that's one part of what we do. It's kind of we we we try to make our our product better essentially by using AI. The other thing that um that that I think is equally interesting is um I alluded to it
The other thing, which I think is equally interesting and which I alluded to earlier, is how we help our customers build reliable AI systems themselves. Our customers are builders; they're building agents and using AI in all kinds of interesting ways. How do we help them make sure that what they're building is reliable? To that end, we've invested in tooling to help with instrumentation of AI agents, and we've built tooling to monitor the telemetry that comes out of them and ensure that the performance and quality these solutions provide to end users are where the builders want them to be. So there are two parts to our brain, both are very interesting, and we learn a lot from the first part about how to do the second.
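As a rough illustration of what instrumenting an AI agent and monitoring its telemetry can look like in practice, here is a minimal, hypothetical sketch: each agent step is wrapped so it emits latency, size, and a crude quality signal that a monitor can later alert on. The field names and the emit() sink are assumptions for the example, not Monte Carlo's tooling.

```python
# Hypothetical telemetry wrapper for an LLM-backed agent step (illustrative only).
import json
import time
import uuid
from typing import Any, Callable


def emit(record: dict[str, Any]) -> None:
    # Stand-in for shipping records to an observability backend.
    print(json.dumps(record))


def call_with_telemetry(llm_call: Callable[[str], str], prompt: str, step: str) -> str:
    """Run one agent step and emit a telemetry record that a monitor can alert on."""
    start = time.perf_counter()
    output = llm_call(prompt)
    emit({
        "trace_id": str(uuid.uuid4()),
        "step": step,
        "latency_ms": round((time.perf_counter() - start) * 1000, 1),
        "prompt_chars": len(prompt),
        "output_chars": len(output),
        "empty_output": len(output.strip()) == 0,  # one crude quality signal
    })
    return output
```

A monitoring layer could then watch these records for latency spikes, empty outputs, or sudden shifts in output length, which is the "ensure performance and quality" piece described above.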
>> And how do you, in your position, pick between making improvements to agents that help your own internal processes versus the product itself for customers? Obviously people bring these requests to you, and you fulfill them as the technical side of the company.
>> Great question. I am a big believer in outsourcing everything you can.
>> Sure. Okay.
>> So primarily we focus on serving our customers, whereas for internal tooling we try to get it from people who specialize in it, wherever we can. I'll give you the most obvious example. We obviously do a lot of engineering; we have about 70 people in the product development org, and we mostly chose to use tooling from Cursor and Augment and others for automating software development.
>> Wow.
>> Because I don't think we're very special in how we build software. We're similar to a lot of other companies, so I think people who are focused on that problem are going to solve it better than I can. The same goes for other functions: we've implemented AI in support, in go-to-market, in a lot of different places, and wherever we could, we tried to adopt the tools that the people who obsess over them all day long can make really good.
We obsess all day long over making our customers successful with tools that only we can build. Take the monitoring agent and troubleshooting agent I mentioned earlier. We use the same models everybody does; everybody has access to the latest and greatest. But we have a unique proposition: we understand those workflows really well, because we've learned from how 400 data and AI teams do their work. The other thing we have that's very unique is data and telemetry. We've spent the last six years figuring out how to get as much information and context as possible out of data and AI stacks, and that's information that's not easy for anyone else to get hold of. I can't say I had the foresight six years ago to understand that AI was going to change everything.
>> No.
>> Not at all. But we collected all this information because these workflows around how to monitor things and how to troubleshoot them are what we use to make it easier for humans to do that in our product. And we were able to do that: we cut troubleshooting time by around 80% even before AI, and we reduced the number of cases where data and AI teams are surprised that something happened, as opposed to learning about it proactively from an alert, by 80% too. All of that was before AI. So we have all this information, and now we can use AI to accelerate further, to produce another 80% on both the monitoring and the troubleshooting side. That's where we find our competitive advantage, if you will, and that's where we spend our time. There are other people who know how to do software engineering really well with AI, or how to solve support tickets really well with AI, and we let them do that part for us wherever we can.
>> Okay, interesting.
>> And just out of curiosity, it's got to be a pretty special experience building a company with your wife. How did that come about? How long have you been together versus how long has the company been around?
>> Oh, good question. We've been together for about 15 years now, and married for probably eight years or so. The company came later, and it was a big decision.
>> Absolutely.
>> Yeah, it could have been tragic, and we were very well aware of it.
>> That's kind of what I was getting at.
>> Yeah, it can be catastrophic, for sure. So we took the time to think it through. Originally it was Barr starting her company and me just helping nights and weekends with whatever I could. Then we went to a mutual friend who knows both of us to get his advice. He used to work at Snowflake back in the day, and he said, "Well, you have Lior. He's cheap labor and he can help you build what you're trying to build." And that's what I've become and what I still do today: I'm cheap labor and I try to build things for Barr. We also consciously decided to apply certain rules, if you will, to minimize the chance of a blowout. We clearly defined our roles and responsibilities so we don't butt heads; there's actually a very small number of decisions we make together, and a lot of independence and autonomy for each of us. We decided that outside of work we'd behave as if we're not working together, so we don't talk about work at home.
>> Good.
>> Looking at us, you wouldn't know we work together. And the same applies at work: if you see us at work, you'd never know we're married. That's designed both to protect our sanity and to make sure people at Monte Carlo feel like they're part of the company, and that it's not Barr and Lior making decisions over the weekend and then letting everybody else know. There are no informal work meetings between me and Barr. It's all part of how we would operate if we weren't married. We do all these things to make sure we're effective both as co-founders and as a married couple, and we were lucky. It worked out great.
>> That's awesome. I always find that interesting. I've never actually had the opportunity to talk with co-founders of a company who were married, so I think that's pretty cool. As long as it doesn't spill over in either direction, I'm sure it's actually really nice, because you get it. In any marriage there's difficulty empathizing about work, but at least in this regard you directly have the same level of stake.
>> Very aligned on career goals. It's a singular mission, so it works out.
>> Very cool. Yeah, that's awesome. So what are some of the things you've noticed changing in the industry over the last couple of years? It's been six years since the company started, but things have gone kind of crazy in the last two to three years with AI. When did you start to see the shift in the public and business landscape toward "we've got to start implementing AI," not just agentic AI but AI in general?
>> Yeah. My two cents from my seat: it definitely started with a lot of hype, and some would argue it still is hype. In the first year of AI there was this huge gap, if you will. There were all these amazing demos you could see on X or wherever, and you'd read these big headlines, and you could do really cool things on your own with ChatGPT or what have you.
>> Sure.
>> And yet there was this huge discrepancy between that and what businesses were actually able to accomplish with AI. The gap was just insane; I've never seen such a gap. The level of promise was so high, and the level of outcomes was close to non-existent, I would say. I think Satya Nadella at some point said they weren't yet seeing the productivity gains in the economy coming out of AI, and that felt true: lots of promise, lots of excitement, but few outcomes. And I think that point was the peak of the top-down mandate, where every board, every CEO, every executive was saying, we've got to use AI, we've got to do something with AI, here are huge budgets, go do things with AI. It took a while, but I think now businesses are starting to catch up.
I spend a lot of my time talking to data and AI teams about what they're building, and it's now pretty evident that teams have gotten a lot further. Many teams already have something in production at some scale, or AI is starting to make a dent in a couple of use cases, and some companies have been able to scale it far beyond that. What's nice to see is that many companies have reached the point where there's something in production and they're starting to see gains. I think a lot of it comes from the productization of AI, if you will. It's one thing to have ChatGPT and paste a lot of things into it for one limited use case, but to really automate business processes, and to get individual humans to do things more efficiently, my two cents from what I've seen is that you have to pull together an experience that is effective. You have to bring together the right data that's relevant to them, and there are examples in every domain. It's one thing to go into ChatGPT and say, create this script for me. It's a whole other thing to go into Cursor, which has all the context from your code, and which maybe is connected over MCP to your ticketing system and to a lot of metadata from GitHub and what have you, and let it do the same thing. Very different outcomes; the quality of the result increases. And it's where people need it. It's not some siloed tool; it's in the IDE, where you're going to use it, and it's specifically tailored to that. I'm sure the Cursor team worked on solving a lot of problems that are very specific to code development in order to smooth out the rough edges and make the thing work seamlessly in as many cases as possible. I think that applies to almost any type of work and any type of business process. The same goes for handling support tickets, which is very common, or really any process you're trying to automate: you need to get the right data, put the right user experience on it, and make sure it's available where people are and want to use it. Because the truth is that most people, if you look at the adoption curve, are in the mainstream or are laggards. Yes, there will be a few enthusiasts who go use ChatGPT for everything possible and dance around the limitations, but for most people, if you don't put it where they do things and make it work reliably enough, they're just never going to adopt it. I think we're starting to get there in a few domains, most probably software development, but I'm now starting to see it across the board, and I think we'll see it in the top line and bottom line in the not-too-distant future.
>> Well, when did you start seeing, or are you seeing, those top-line, bottom-line types of results from your end? Because the phrasing I heard, even at companies I was at before starting my own thing, was just, "Oh, we're using AI, we're using AI." And I'd think, okay, using ChatGPT or whatever in the day-to-day, and I don't really know how much you're getting out of that. You're obviously more at the forefront, you're making things. When did you start actually seeing these outstanding gains?
>> Yeah. I think it gradually started about a year ago and it's now picking up real pace. I'll say that both as a company that has adopted AI pretty heavily in everything we do, and as a developer of AI products for others. I'll give two examples. Again, looking at software development: I want to say that a year ago, AI was probably limited to the five or ten percent of the team who were the early adopters and enthusiasts, willing to go through the trouble of trying things out and learning. Today, I want to say north of 80% of our team uses AI coding tools every single day. I don't know the exact percentage, but many of them are huge champions and say they would never go back to what came before. So you're seeing that shift happen, and I attribute much of it to the improvement in the tooling and the improvement in the models. And I can say the same about our own product. A year ago, I think all we had in the product was a widget that let you generate SQL from natural language. It was useful, and we had some usage around it, but it was pretty moderate. In the second half of last year we launched the monitoring agent, and today adoption is pretty broad: most of our customers use it, and many use it in a pretty engaged manner, as part of their weekly or bi-weekly routine. As for the troubleshooting agent, we just rolled it out, so we'll see. But based on our internal testing and feedback, and everything I've seen over the last few months, I'm pretty confident it's going to become part of the daily routine for all our users. So I'm optimistic that we're going to start seeing, or are already seeing, the impact, certainly from an adoption perspective, and I think all of that is a leading indicator, at the end of the day, of revenue and growth and success for us. I'm very excited about that.
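For context on the natural-language-to-SQL widget Lior mentioned a moment ago, features like that are commonly built by handing a model the table schema along with the user's question and asking for SQL back. Below is a minimal sketch, assuming the OpenAI Python SDK; the model name, schema, and prompt are illustrative, and this is not Monte Carlo's implementation.

```python
# Hypothetical natural-language-to-SQL helper (illustrative, not Monte Carlo's implementation).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SCHEMA = """
orders(order_id INT, customer_id INT, amount NUMERIC, created_at TIMESTAMP)
customers(customer_id INT, name TEXT, region TEXT)
"""

def question_to_sql(question: str) -> str:
    """Ask the model to translate a plain-English question into SQL for the given schema."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": f"Write a single SQL query for this schema:\n{SCHEMA}\nReturn only SQL."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

print(question_to_sql("Total order amount by region last month"))
```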
>> Where do you think the average company is going to start seeing outstanding agentic improvements, adoption-wise?
>> Again, software development: the tooling is mature, it's here today, and it can be used. I also think all forms of customer communication, whether it's support, sales, or various kinds of customer service. Again, I can speak both to my own experience and to the experience of people who've built these solutions at other companies: it's working, it's effective today, and I'd recommend that anyone doing that work adopt AI and implement solutions, whether you build your own or adopt a tool. It's already delivering meaningful gains, massive reductions in cost, and I think better service too. For example, at many companies today, when you're a human agent speaking or corresponding with a customer, you'll have an AI agent open on the screen helping you out: giving you relevant information about the customer, proposing what to bring up, proposing how to solve problems, proposing what products to sell. That's been implemented successfully in many companies, and I think adoption is going to keep growing really quickly. On the research side, I think there have been a lot of gains too. Companies that work with huge bodies of research, think pharma and the like, are seeing gains there; it's helping scientists find information more effectively and also create documentation where necessary. That's working today and will continue to grow. And of course marketing, generating content, visual or textual, is probably one of the early use cases that worked well. An area that hasn't seen a ton of traction yet, I think, is automating go-to-market, if you will.
>> Yeah, touch on that. What do you think about that? It's something I have seen some movement on, and I haven't even been thinking about doing it for myself, because it just feels so difficult. There are so many hodgepodge pieces of tooling, but nothing that quite does it comprehensively right now.
>> Yeah, I think you're right. There's now a cohort of companies trying to do this, and I do think some of them will revolutionize it eventually.
But so far it perhaps hasn't come to fruition, in the sense that, unlike some of the domains I described earlier, go-to-market can be a lot more specific to a particular company. How you sell the product and how you then drive success and growth with it is actually pretty different from company to company, and not well documented.
>> True.
>> We'll see, but my wild hypothesis is that to do it well, you basically have to get access to a lot of enterprise systems to get all that context. You need access to a lot of internal documentation and to all the communication tools, internal and external, be it email, Slack, and what have you. You need access to analytics, to Snowflake or Databricks, to understand how customers are using things. You need access to the CRM, and to a lot of call data if you have it, from Gong or what have you, in order to really have enough context to automate go-to-market functions. I think we'll get there. I also know our team evaluated a lot of tools and it's all very early; it's not quite as advanced as those other domains. Ultimately, I guess, there was a lot of open-source code to train models on and have them generate code pretty effectively without much internal context, but in go-to-market that probably doesn't exist, so you're going to have to feed it a lot of internal data to make it effective at helping real humans do their work.
>> I guess, what are you using for your own go-to-market strategies then, if the tooling isn't really available yet to do it well?
>> A combination. We're testing a bunch of tools that we've reviewed and the team decided could be helpful. I won't name names yet, because I don't think we've settled on or partnered with any particular one of them. The other thing we're doing that's been fairly effective is that a handful of individuals in the company are enthusiasts and early adopters; they love learning and trying new things, so we basically had them create their own agents. I think some of them use the ChatGPT tooling, some use Perplexity. They've been able to create some rather helpful, simple agents that automate various pieces of the workflow on their own, and I believe it has reasonable traction; it's automating bits and pieces of what we do. So that's the other strategy while the tooling catches up to where it can be used broadly by many of our team members across a lot of different use cases.
>> Interesting. Okay.
>> So, I guess, more on the, I don't want to say negative, side of things, because honestly who knows: where do you think the job market is going to shift, whether positive or negative, with everything going on? I think it's going to be interesting how the job market shifts moving forward.
>> Easy for me to say, but I think it's overall going to be very positive. First of all, I want to say that all those changes are happening much slower, and will continue to happen much slower, than people believe. It takes a long time for companies to really change how they work and to really adjust headcount to this. I know it gets a lot of headlines to say that company X or Y is shifting headcount because of AI, but I actually think AI is oftentimes the excuse for something those companies wanted to do anyway, more than "we've just automated half of our workforce and we're letting people go because of that." So I'm pretty optimistic. Some jobs will go away, for sure, and the work itself will change, but I think it's ultimately going to create prosperity. I'm borrowing here from Jensen Huang, but I deeply believe it myself: if we automate certain things away and get rid of certain menial jobs, demand will increase. When you reduce the cost of something, demand increases. I'll take something that's been close to me, like software. When I first went to undergrad, you would probably code in C, and it was really important to know how to allocate and free memory, and you spent a lot of time trying to make that work. The cloud hardly existed, so you'd spend a lot of time setting up data centers and managing your hardware and infrastructure. Then Python came, the cloud came, and all those things kind of went away. I'm sure it cost jobs; very few people today run data centers, and most of them work at Amazon, Google, and Microsoft. But it created huge demand. It opened up the possibility of solving a lot more things with software, and that in turn created so many software engineering jobs, solving all kinds of problems that nobody would have tackled in the past because it would have been too expensive. I think AI is the same. It will reduce the need for certain things, but overall demand will increase, and I believe it will bring a lot of prosperity to humans, especially if they adapt and learn how to use AI and how to do their jobs in a different way.
>> Well, I actually just recorded another podcast episode today where I was talking about this. You're talking about all these software engineering jobs; I don't know if you've heard, but there have actually been some universities that are getting rid of their software engineering programs and launching AI programs instead.
>> Oh, good for them. Sure, it's good for attracting students. I have not seen any indication that software engineering is going away. So I hope that, beyond changing the name of the major, they're still teaching people the foundations of software engineering. But I think both things are true. Am I predicting great career success for people who code without using AI? No. But I'm also not predicting huge success for people who try to build things with AI without knowing anything about software engineering. They won't be successful either. I think it's a little bit about understanding what's going on and then applying AI to do it better and faster. It automates certain pieces, but I'm not at a point where we can hire people who are not software engineers to build software, and I don't think we're actually close to that.
>> Really? Okay.
>> And I'm not sure we're going to be close to that anytime soon.
>> Gotcha. Okay. Why do you think that? I'm not pushing back, I'm just curious. For example, I'm not a developer; I can do this new vibe-coding sort of thing with whatever it is, Lovable, etc. But I definitely do think there's a need for software developers, because at a certain point I'm capped: if it goes outside of database structure and API calls, and beyond my knowledge of how things generally work, I'm kind of maxed out. That's where I think someone who gets maybe that AI degree, or whatever it is, would be. So why do you think we're still far from it? I am curious.
>> Oh, it's exactly what you said. Can you do a lot more today without being a trained software engineer? Absolutely.
But do you still need to know how to do it when you want to do certain more complicated things, or when you want to scale things? Also absolutely. I'll give you an example: our product managers. For a long time they wrote requirements just like any product manager would. Today they sometimes create live prototypes of what they're proposing to build, because they can; they don't need to be engineers to do that. You can build a prototype pretty quickly, and it's very helpful and very useful. It doesn't mean you can take that prototype and ship it to our customers, because embedding it in a broader system, figuring out all the edge cases, making sure it performs well, testing it properly, and all that good stuff still requires a certain level of expertise.
>> Absolutely. Yeah.
>> So I'm sure that quote-unquote vibe coding is very valuable. I think it's going to get better, and you'll be able to do more things with less complexity. And still, there's going to be some last mile, when you want to put things into production, that will require the kind of expertise a software engineer holds today. So again, I think it's more about how you merge those worlds, how you merge AI and software engineering, as opposed to how you completely replace one with the other, at least in the foreseeable future.
>> Absolutely. Yeah, that's a fair point. It's funny, and this is just my little negative commentary on LinkedIn posts: I see a lot of people who aren't devs saying, "I built 100 apps with Lovable" in however much time. Have you seen these ridiculous posts?
>> Yeah. Yeah.
>> And I always think, okay, I'm fully aware that none of the buttons work on this and that these are wireframes. But that's kind of where we'll be for a while with quote-unquote vibe coding.
>> All right, we're getting toward the end of the call here. Are there any final thoughts you have on the industry, or anything else you wanted to cover before we close it out?
>> I actually think we touched on a lot of good things. Let me think. As a shameless plug, I would say that what I'm seeing from the front seat is a bunch of new challenges when it comes to AI, in terms of getting things from the prototype and experimentation stage to serving real users and real stakeholders at scale. To me, that's the next frontier in making AI work for the top line and bottom line, like we talked about. These are the nuts and bolts that are hard but really important. It's making sure security works, that you're not getting access to data you're not supposed to have access to. It's making sure compliance works, especially in regulated industries like finance, and that you're not breaching any rules. And it's making sure quality and reliability work, because exactly like you said with that LinkedIn post, you can show the one example that works, but when you put it in the hands of people in the wild, they're going to use it in a lot of different ways and for a lot of different scenarios, and that requires a level of diligence that's very different from building a prototype. That happens to be the part we're focused on at Monte Carlo: helping teams get from the prototype stage to "okay, we're putting this in production and we're confident it's doing what it's supposed to." To me, solving these problems, and probably a few more I'm forgetting right now, is the key to unlocking this. I think the models are great; they've made significant progress, and it's just incredible what they can do. Just go to chatgpt.com and try it, or Claude. But now the question is, how do we get it to work in our own use cases, with our own data, with our own sets of users and edge cases, and what have you?
So it goes back to the fundamentals of security, observability, and user experience; I think there are a lot of new user experiences that need to be worked out. It's all of those things that I think are going to get us the next leap in outcomes, if you will, more so than further advancements in the models that will make them ever more sophisticated.
>> Awesome. Well, I really appreciate those insights. I think what you guys are doing is incredible. I'd just say to everyone listening, please make sure to go to montecarlodata.com to check out everything our guy Lior is doing. Anything else?
>> Thank you, Dmitri. It was a fun conversation.
>> Awesome. Thank you so much, Lior. Have a wonderful rest of your day. All of you listening back at home, make sure to leave a like on the podcast, follow us on Apple Podcasts, leave a review, and we'll see you in the next one. Peace.