Episode 95 Nov 04, 2025 49:32 3.5K views

Building AI Agents for Enterprise with Chris Hand (GenServ.ai)

About This Episode

In this episode of the AI Agents Podcast, we sit down with Chris Hand, founder and CEO of GenServ.ai, to explore how enterprise organizations can deploy reliable AI agents to handle complex, repeatable business processes.

Chris shares his journey from leading a venture studio to launching GenServ, a company dedicated to building "digital employees" that bring automation and generative AI into the back office, streamlining operations across industries like staffing, legal, education, and equipment financing.

Throughout the conversation, Chris dives into the technical foundations of building functional AI agents—highlighting prompt chaining, multi-model orchestration, and aligning AI use with real business goals.

He also touches on the evolving AI model landscape, including Claude 4.5 Sonnet and Gemini 2.5 Flash, and why system access remains the biggest bottleneck for scaling intelligent agents.

If you're looking to adopt AI meaningfully in your business or want to understand how agentic automation is reshaping hiring and knowledge work, this episode offers practical insights.
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⏰ TIMESTAMPS:
0:00 - Back Office Automation With AI
1:37 - Starting GenServ AI
4:08 - The Power Of A Co-Founder
7:18 - Solving Real Business Problems
10:11 - Why Prompt Chaining Still Matters
17:39 - Implementing Agents For Clients
24:25 - AI In Traditional Industries
28:03 - Underestimated Power Of Gemini
36:02 - Evaluating New AI Models
39:58 - Job Replacement And Digital Employees
44:54 - System Access As The Next Barrier
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Sign up for free ➡️ https://link.jotform.com/b7UN8NWVx3
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Follow us on:
Twitter ➡️ https://x.com/aiagentspodcast
Instagram ➡️ https://www.instagram.com/aiagentspodcast
TikTok ➡️ https://www.tiktok.com/@aiagentspodcast
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

Transcript

I definitely am in the camp where jobs are going to be replaced. I generally think the first ones to go are probably the ones that people don't enjoy doing. And that's the back office stuff that is just manual and really shouldn't require a human to do if technology was better. >> Hi, my name is Dmitri Bonichi, and I'm a content creator, agency owner, and AI enthusiast. You're listening to the AI Agents Podcast, brought to you by Jotform and featuring our very own CEO and founder, Aytekin Tank. This is the show where artificial intelligence meets innovation, productivity, and the tools shaping the future of work. Enjoy the show. Hello and welcome back to another episode of the AI Agents Podcast. In this one, we have Chris Hand, founder and CEO of GenServ.ai. How are you doing

today, Chris? >> I'm doing great, Dmitri. Looking forward to it. >> Yes, me too. You're calling in from beautiful Alabama, which is managing to have about 40 degrees higher temperature than me today. I just checked that to be a little bit jealous to start my day. I'm calling from the beautiful, brisk town of Chicago, which went from 80 degrees to 20 in about two weeks, but that's okay. We don't care about the weather. We care about AI, and we care about you today. So with that being said, I would love to hear a little bit, just to start off, about you. Tell me how you got your start working in AI and what brought you here, and then tell us a little bit about GenServ and what you do out there. >> Yeah, I'd love to. So I've been in

technology for 15 years now, coming on 20, and most of that has been web-based, most of it in the startup world. Just recently, we'll say four years ago, I started with a venture studio out of Nashville, and we wanted to try and help build the ecosystem of startups here in the Southeast. Well, that just so happened to coincide perfectly with the launch of ChatGPT and the AI LLM craze. And so, as we started building companies, we thought, we're going to go all in on this AI thing. We're going to do as much of it as we can. It's a new tool. It's a generational technology. What can we do with it that people haven't done before? So we ended up launching six companies and probably prototyped 15 others, all of which had a core component of generative AI, and I was the

CTO of that venture studio. And so I got deep in it really, really quick. But I wasn't deep in it as just, we'll say, an experiment. I was deep in it as somebody trying to actually get it to work for a business that needed to generate revenue, be profitable, and have good unit economics. And for the next two years, that's what we did. Along the way, we developed a whole lot of technology playbooks and patterns that were really effective at testing, iterating, and launching custom solutions. And at the end of the studio life cycle, because it had a two-year build life cycle, we thought, well, heck, we've got some pretty valuable stuff here. Let's take it to market as its own thing. And as a tech guy, I was like, I'm all in on this. Let's do it. And so we spun that

out into its own company called GenServ.ai, and that's what I've been doing for the last year and a half. It's been great. >> Nice. Yeah, tell me a bit about GenServ, I guess. It's been, you said, a year and a half, and I'm kind of curious. You're a founder, but do you have any co-founders? I always like to ask that. >> I do. Yeah. So the chief product officer from the venture studio is my co-founder, whose name is Mark Mobly. Best product guy I've ever worked with. So I'm super lucky to have him, and yeah, we grind every day on it. >> That's awesome. Tell me a little bit more about what that's like. I'm a founder of a company, and I don't

have a lot of experience with what it's like to have a co-founder. What is that like? >> Well, you know, you're in the trenches, and I would say the biggest difference for me, coming from startups that were still small but not that small, say 20 to 50 people, is that doing it on your own, or close to on your own, you feel like you're often working in a vacuum, right? You wake up every day and you just ask yourself, especially as a solopreneur, what should I be doing today, because nobody's going to tell me what to do. And so having a co-founder really keeps you honest, and it also keeps you sane. You're not just in the trenches by yourself trying to figure out what to do every

day. You're doing it with somebody else. And even just having a single person who can, you know, block and tackle for you day in and day out is huge. So I'm hugely grateful not to be doing it on my own, aside from the fact that Mark is just an amazing product individual. And so when we approach talking to customers about how they should be talking about AI, thinking about it, utilizing it within their business, that really comes from product principles first and foremost. And I think that's something that's kind of lost in the shuffle of opportunity and advances today: coming back to just good, solid principles. He really brings that to the table, which is something you can't put a value on. >> Yeah, you know, I think that's definitely true. I'm the founder of my

own agency, and a lot of days I'm like, man, what do I do? What do I tell myself to do today? I try not to talk in the mirror too much, but that ends up occurring sometimes. And no, I think that's awesome that you have a co-founder to work through it with. So, you know, you kind of mentioned it a little in the beginning, but tell me more: what problems does your company solve on a consistent basis for companies? >> Yeah, it's a great question, and I'm going to back up a little bit, because as startups often do, we've, I won't say pivoted, because it hasn't been that big of a change, but we went kind of backwards, because we started with a playbook of technologies

and patterns. And I have to say that both of them are equally important, because the big challenge with generative AI for me is: how do you make it do something consistently and reliably, so that you can build a business process around it? I think everybody who's used AI can relate to the honeymoon phase, which is what we call it when you try something, it looks amazing, and you think, I could build a product or a process around this. And then getting it from, say, 80% reliability to 100%, or even 95-plus, is just a bear. We found really early on that we had developed a few patterns and techniques, none of which are secret, that make that a lot easier. And so we started by being able to help a couple of

our portfolio companies with their products very reliably. The very first product that we built was a suite of agents called Bidcore. It'll take an RFP, break down the requirements of that RFP into a set of consistent grading criteria, and then you can upload submissions and it will grade those submissions consistently according to that set of rubrics. We developed GenServ to build that product, and then we found very quickly that the things we had built in GenServ, the platform, were techniques that at their heart started with prompt chaining. Not to get too deep too fast here, but we realized very quickly that one-shot prompts, where I give an LLM a prompt and get back a result, are only going to take you so far. But if you break down your task into

a set of, say, five to ten discrete questions, all of which are very small and targeted, you'll get a way better result. And we needed to be able to do that really quickly, which meant we weren't going to write code to do it every time. So we built a platform that allowed people to do it very quickly, and that was the foundation of the GenServ platform. So when we talk to customers and figure out what problems we should solve for them, most of it's based on repeatable processes that require manual intervention from a human, whether that's review, extraction, that kind of thing. And then we turn that into something they can consume really, really quickly. But over time, what we've shifted to is making sure that when we talk to a company, we really ask: what's your goal

with your company? Over time, you don't want to just adopt AI. No company, besides an AI company that's actually trying to develop AI, just says, I've got to use AI. A financing company is trying to have as many assets under management as possible. A staffing company is trying to put people on jobs. An education company is trying to get people in classrooms. That's what they're trying to do. They're not trying to just use AI. So you need to align whatever solutions you come up with with the business goals that they have. And going back to what I mentioned about our co-founder Mark, I think that's really where product principles come into using AI as just another technology: you've got to start with the goal and then kind of work backwards from there.
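The prompt-chaining pattern Chris describes, breaking one big task into a handful of small, targeted prompts run in sequence, can be sketched roughly as below. The `complete` function is a stand-in for a real LLM API call (no specific provider assumed), and the RFP-grading steps are illustrative, not GenServ's actual pipeline.

```python
def complete(prompt: str) -> str:
    """Placeholder for a real model call; a production version would
    call an LLM API here. Stubbed so the chain's structure is runnable."""
    return f"[model answer to: {prompt[:40]}]"

def run_chain(document: str, steps: list[str]) -> dict[str, str]:
    """Run each small, targeted step in order, carrying prior answers
    forward as context instead of asking one giant one-shot question."""
    context = f"Document:\n{document}"
    results: dict[str, str] = {}
    for step in steps:
        prompt = f"{context}\n\nTask: {step}\nAnswer concisely."
        answer = complete(prompt)
        results[step] = answer
        # Each step's answer becomes context for the next step.
        context += f"\n\n{step}\n{answer}"
    return results

steps = [
    "List the explicit requirements in this RFP.",
    "Turn each requirement into a yes/no grading criterion.",
    "Grade the submission against each criterion.",
    "Summarize the grades as a final score.",
]
graded = run_chain("RFP: vendor must support SSO and 99.9% uptime.", steps)
```

Because each step is small and targeted, a wrong answer is localized to one link in the chain, which is much easier to debug and tune than one sprawling prompt.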

>> You know, I want to speak a little more on one specific thing you mentioned there, which I do think has a pretty important place in a lot of people's minds: prompt chaining, right? I think this is something still kind of in question. It was first in question because obviously people didn't know how to use AI, and then it got to a point where everyone seemed to be pretty much in consensus: do it, it's going to provide better results. And then models have improved, right? We've gotten models like Claude 4.5 Sonnet, which is incredible, the best thing I've ever used. It's mind-boggling that that thing exists. And I just have to ask, since we're an agentic show and agents

are, you know, provided a lot of different context, a lot of the things you mentioned earlier: would you say that even in the case of well-trained agents with good knowledge bases, background instructions, et cetera, prompt chaining is still the standard for quality tasks, or is it contextual? Go on a rant about it. I'm curious, because I feel like I hear from people, oh, we don't need that anymore, it's got the perfect background, and I'm skeptical. >> Yeah. Well, let's talk about what an agent is meant to do. The agents that we put in place for customers always have a consistent output that they need to be able to produce. It needs to be reliable. It needs to be consistent. And we need to benchmark it. So, just

as an example, and these are real examples from our customers: we need to be able to have 99% accuracy in extracting terms or classifying documents that are uploaded into a system. So we're going to, quote unquote, train an agent to do that. Well, I actually watched one of your earlier podcasts, I think with Arjun, and he was talking about the difference between the models that just give you an answer and the ones that will turn on extended thinking or reasoning. I actually think that's a great equivalence here. So if I have an agent, and it's extremely capable, maybe it's even using one of the thinking models, I give it a task and it starts breaking that task down into the set of steps it's going to perform. Behind the scenes, it is prompt chaining itself. And that's great. You'll

get much, much better results from doing it. But the problem comes in not being able to control the steps that it's taking every time. Because if you vary the steps that it takes, no matter how powerful the model, the output is going to be different. And if you think about an agent that needs to be able to work system to system, computers are still the same as they were a few years ago: they need a consistent input and output to work with. Which means if I have a system that expects a set of fields or properties from an agent every time, the agent had better give me back those properties every time, especially when you talk about incorporating legacy systems. An agent has to have an interface with a human, and a chat window is a very free-form one that honestly is

really good for humans to interact with. But when you go from system to system, that breaks down very quickly. So you have to have a contract for that. Well, if I have an agent and I want to teach it a set of tasks, it's great for me to be able to break those tasks down into steps. But what we find is the most reliable way to turn that agent into a reliable, say, digital employee is to very clearly define: when you do this task, you're going to do step one, two, three, four, and then you're going to end up with a concrete result. And behind the scenes, even if you have individual agents performing those tasks, that's the same concept as prompt chaining originally, where I've got four different steps.
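The system-to-system "contract" Chris mentions can be made concrete as a schema check that every agent response must pass before it is handed to the downstream system. The field names below are hypothetical, purely for illustration, not GenServ's actual schema.

```python
# The downstream system expects the same fields, with the same types,
# every single time, so the agent's output is validated up front.
REQUIRED_FIELDS = {"document_type": str, "counterparty": str, "term_months": int}

def contract_violations(payload: dict) -> list[str]:
    """Return a list of violations; an empty list means the payload
    honors the contract and is safe to pass to the next system."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"wrong type for {field}: {type(payload[field]).__name__}")
    return errors

ok = {"document_type": "lease", "counterparty": "Acme", "term_months": 36}
bad = {"document_type": "lease", "term_months": "36"}  # missing field, string not int
```

In practice the same idea is often expressed with JSON Schema or typed models, and a failing payload can be routed back to the agent for a retry instead of being silently passed along.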

I define exactly what those steps are so that I can fine-tune the input and output from each one of those steps. >> And I'll actually give you an example, a very concrete example. Sonnet 4.5 is by far my favorite model. I use it every day, all day, and we use it for analysis within a tremendous number of our agents. We actually swap between models and between providers a lot. I don't think there are any agents we have that just use Anthropic or Gemini or any single one of them; they're all different. But Sonnet 4.5 is by far our favorite for analysis. However, if you are trying to do a tremendous amount of analysis and you're trying to get a consistent output, that can often overload even the best models, and you will find variances in the output. So,

for instance, I need to get a JSON output from this agent because it's going to another agent, or I'm just trying to insert a result into a database somewhere. Well, if I ask Sonnet 4.5 to extract and analyze a hundred data fields and then also format that for an insert, you will find variance that you won't find if you just tell Sonnet 4.5, hey, just do a bunch of analysis to understand what these outputs should be. And Sonnet in particular, all the Anthropic models, they love XML and they love Markdown. So tell it to produce it in Markdown and XML. And then take another model, make it faster, cheaper, you know, whatever; we love Gemini 2.5 Flash for this. Take that other model and just say, grab all this data and put it into a JSON payload, right? And you've prompt-chained, right?
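The two-model chain just described, a strong model doing free-form Markdown analysis and a cheaper, faster model doing nothing but JSON formatting, might look like this in outline. Both model calls are stubbed here; in a real system they would be API calls to, say, an Anthropic model for step one and a Flash-class model for step two, and the invoice fields are invented for illustration.

```python
import json

def analyze_with_strong_model(document: str) -> str:
    """Step 1 (stub): free-form Markdown analysis with no formatting
    burden, the shape Anthropic models tend to produce comfortably."""
    return f"## Analysis\n- subject: invoice\n- total: 1250.00\n- source: {document}"

def format_with_cheap_model(analysis_markdown: str) -> str:
    """Step 2 (stub): the only job is turning the analysis into strict
    JSON, a small task well suited to a fast, cheap model."""
    fields = {}
    for line in analysis_markdown.splitlines():
        if line.startswith("- ") and ":" in line:
            key, value = line[2:].split(":", 1)
            fields[key.strip()] = value.strip()
    return json.dumps(fields)

payload = json.loads(format_with_cheap_model(analyze_with_strong_model("invoice.pdf")))
```

Because the second prompt has exactly one job, producing a valid payload, its output is far easier to validate than that of one giant analyze-and-format prompt.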

Right, that's a prompt chain, and it's an extremely powerful one, but it is breaking something down into two steps. And once again, what's interesting to me is, as you see these reasoning models, behind the scenes that's what they're doing: iterating on the output themselves. >> Just to speak to that a little bit more, that's a good point about agents and different models, and something I don't feel is talked about much. I don't want to say in the AI community, because clearly you're in the space and entrenched in it and you've noticed the value, but in the general colloquial zeitgeist of AI, Gemini is not really spoken about as respectfully as it should be. I

do a lot of creative work, a lot of script writing, that kind of stuff, for my content agency, and until Claude 4.5 Sonnet was out, Gemini 2.5 was better than everything else, in my opinion, at least for that type of thing, which is a lot of what people are using AI for, right? Like they're using a lot of it to write AI slop or whatever, straight for themselves, and I'm like, you guys aren't even using the right model. ChatGPT has not been the best at this since, I don't know, the whole time. And yeah, I think that's a fair point about chaining one model onto the other, and how you can chain models even in those instances. Really cool. So, as a company, right, obviously you say you

help implement these agents for your different customers. So what does that look like? I know when I looked at some of your case studies, you helped out a lot of different companies: education, staffing, equipment financing. Could you walk us through a process you had with a client, where you helped them implement a process similar to what you just showcased, an agent that had that type of chain in it? >> Yeah, for sure. So there are really two examples we find when a company comes to us and wants to implement AI, if you will. And I hate saying that, because once again I think that gets it backwards, but we'll go with it. One, and this is probably about half our customers: they've tried to

implement something with AI and they just can't get it to work, whether because they don't think the models are smart enough or they don't think the output is consistent enough. They have tried to do something and it just doesn't work. So they've got an existing problem. They know what it is, and they know they want to solve it. Then there's the other side of the spectrum, which I think is a lot of companies today: those companies that have a mandate from their CEO that just says, we've got to use AI, we've got to jump on the AI bandwagon. And you know, I see companies out there, take any of the LinkedIn stories you see: I'm laying off 50% of my staff, but I'm being 80% more productive. All the stories that probably aren't true at all, but

you see a lot of them, and they're getting a lot of hype. >> Oh, yeah. >> And understandably, right? I mean, everybody wants to make more money. If I told you that I had made the equivalent of a $100 million ARR company in 20 seconds using this new tool called Lovable, then you obviously would believe me at face value. Just kidding. I'm pointing out the ridiculousness of the LinkedIn sphere right now. That's pretty much what it is. It's like, I built a $100 million company in 20 seconds. No, you didn't. >> Yeah, or I killed DocuSign. >> Didn't that company end up getting sued? >> I know, I actually dedicated a lot of content

just around that, and that's a whole different thing; we can talk about that later in the pop culture section. >> There you go. I love it. >> No, so when we have a company that's got an existing problem, then honestly we've done enough of these reps with these models to know whether you're trying to solve a hard problem or an easy problem. And maybe I shouldn't say easy; I'll say straightforward. The whole idea around our technology with the GenServ platform is that we can iterate and put up POCs really quickly. So we'll spin up a POC that approaches their problem, and we're able to tell very quickly, hey, the goals you have in mind are realistic or they're not. And based on whether or not

they're realistic, we can just come up with a plan of attack for implementing that for them. And we'll take it entirely off their plate and say, hey, we're going to offer you AI as a service, and we're going to plug into your existing systems, because we have a platform that'll let us do it. You'll get an interface through email, through an API, through whatever have you, but we'll just do it. That's actually becoming increasingly less common. More companies these days just want to use AI because it's a CEO mandate. And that's a really hard place to be in as a business. You are almost by definition picking up a hammer and looking for a nail somewhere within your business. And so we start that conversation much more deliberately, asking, well,

what are your business goals? Where do you want to be in the next two years, the next three years? Then we ask, what are your constraints? Okay, let's look at those constraints and see if it makes sense to try and address one of them with AI. And some of that's really easy. Take one of the examples we've already touched on here, which is content creation. You could probably come up with 50 industries whose main capital is content of some kind, and I'm actually going to lump law firms and legal entities into this bucket as well, because ultimately what a tremendous number of law firms are creating is documents. That is content, and it has to be very structured. It has to be legally consistent, it has to be good, but it is

content. >> And so I would put forward the very strong opinion that if you are in an industry and a company that is developing content, the written word, and you are not using some sort of AI to draft that for you first, you have a very obvious and easy opportunity in front of you to cut time, costs, all of the above. And we've worked with legal firms before, and just to give some numbers for scale here: there are a lot of documents that have to be created bespoke for a client. They can be between 30 and 50 pages long. They need to be very well structured because of legal terminology, the legalese that everybody's familiar with. And guess what? Sonnet is really good at writing those if given the right context. So if you're a company that's never done

that before, and all you have access to is something like Claude.ai, then it's difficult to know, well, how do I take my playbook and make it into a process that my business can use to cut down on drafting time, or to increase the number of clients I can work with? And honestly, that's our sweet spot, because we come in having built a lot of products across industries without AI, and we just say, hey, guess what, now we can do a whole lot more, because we've got agents we can train to do things that required a human beforehand. And we will work with them to very clearly understand: here's your constraint, here's your bottleneck, here's where a new tool or process or agent would be able to very clearly accelerate that process, and this is the

timeline and the realistic expectations you can have from it. And after that, it's just about execution. >> Yeah, that makes sense. It really is quite clear to me there are some companies right now... well, I find it interesting, and I'm curious about this too, because I saw some of your use cases. Would you say that a big opportunity for AI to help is actually more in, I don't want to say older-school or more old-fashioned type businesses, but I'm trying to find the right word for this. HVAC, the trades maybe, the administrative portion of the trades, the office-type stuff. Yeah,

because I feel like I saw some of your examples, and they're kind of interesting, because I'm assuming the majority of knowledge-work companies, as they're proverbially called, or people who describe themselves that way, working in an industry where all of the jobs are knowledge work, would say that they're trying to make an effort towards AI, whereas I don't know as much about the side of the trades. >> Well, that's a really interesting question. We've actually spoken with a few, we'll say, more local companies who have reached out, and they're getting really creative with how they use AI. We spoke with one who was, I think, an electrician by trade, and they worked very closely with a construction company. And to give quotes, the first thing they would do on a

job site is literally just snap a picture and give it to one of their GPTs at the time, asking about estimates. And this was just a random contractor. That guy's going to do really well, because he's already thinking forward. And the interesting thing about it is, he was okay with the possibility that it could be a little wrong, right? Especially since this was a year ago, when it wasn't as good as it is today. He was okay with that, because for him the trade-off was a very clear time saving. And to go back to your original question on opportunity, I think that the clearest opportunity in any business is back office. It's all the things that are, to be totally transparent, unsexy. Nobody likes to do it, or you're a special individual if you like

to do it. And to be honest, most humans would love to hand it off to computers. So when you are analyzing pictures or inputting invoices, or however many things you could come up with, all of that should start with AI, and then a human should be there to make sure it's right, to make sure it's good. But let's offload all of it. And there are a lot of businesses out there today, either local or larger companies that are a little bit more archaic, a little behind the times, that have the opportunity, without being too close to hyperbole here, to double or triple without increasing headcount, or by lowering headcount. And the biggest opportunities there are absolutely in back office. But if you have a bottleneck in there

that could also be identified, then there's a real opportunity for the business, and I don't think it should be ignored. To go back to one thing you mentioned, because this is also near and dear to my heart: I think a ton of people sleep on Google as an entrant. I don't know why, but I don't see nearly enough accolades given to what Google has done. I think that when they went from 2.0 to 2.5, in both Gemini Pro and Flash, that was an entry that was just outstanding. They caught our eye when they released Gemini 2.0 Flash, because that was the first model that was so dirt cheap that you could do almost anything on it at a negligible cost, but it could do things for you reliably. Like, there

are a set of tasks you can give it that you know it will be reliable enough for. And it wasn't even cents; it was fractions of a cent. So I think people are sleeping on that. But the reason I think that's important is because the local contractors, the do-it-yourself people, the solopreneurs, they all have access to Google. It's very inexpensive. And when you can see a cohesive set of environments there, if you're using Google Drive, if you're using Gmail, if you're using all of this, and all of a sudden you have access to some of their really powerful models, then you'll quietly be using it and not even know that you're kind of on the bleeding edge of using AI. >> Yeah, you know, I've got to say, about the Gemini comment you made: same here. And

I think it's because when Gemini first came out, it was awful. There's no other way to put it. Some would say dog water, or another word, but it was very bad. And I think it's one of those instances where marketing and PR cause issues. Like, for example, I don't know what your opinion is on this subject: let's not factor in the new AI browsers. I know ChatGPT just released one; forget that for now. Microsoft has arguably had a better browser than Chrome for six years. Nobody knows. Why does nobody know? Because they failed so hard the first time, and people just assume. They can't get over it. They'll never get over it. It was the biggest botch ever.

Internet Explorer was awful. Then the initial launch of Edge comes out, and nobody understands it's based on a Chromium backend and not whatever they were using the first time. And people are like, well, it's the Microsoft one, it's got to be bad. And, I don't know if it's still the case, but when I did the research and switched to it like three or four years ago, it was outperforming Chrome in RAM management by every metric, and it wasn't even close. And what's everyone's complaint about Google Chrome? Too much RAM. So, point being, it was better for the average person for years, but nobody knew, because they failed at the beginning. >> Yeah, I mean, that's totally fair. In fact, I think if you went back through my LinkedIn feed, you probably will find something about how

Google was just out of the AI race because of how bad some of their releases were. And then months later, I think I had to eat my words, and pleasantly so. But I do think that there's a key difference between somebody sitting in my seat, where I'm actively trying to keep up with the best things at every point in time, and somebody who's just more passively using it or tinkering with it. And perception is huge. I think they've done great with their public perception around Veo, and, you know, Nano Banana, because those were really strong, really impressive. But on the model side, you may be right. It might just be purely that they shot themselves in the foot so badly early on that people aren't trying it. The other

thing that I found, just anecdotally, is that when Gemini first released their version of Deep Research, I thought it was better than OpenAI's original version of Deep Research, >> and honestly better than Anthropic's. >> And I was trying to beat this drum on the show. I was like, guys, you want the most cost-effective AI right now? You can spam three Gemini Deep Research runs at all times, non-stop. Did you know that you can just run unlimited Gemini Deep Research and make agents? And people were like, what? Why would I do that? I was like, well, it's better than ChatGPT's, and it's unlimited, so why would you not? And ChatGPT would limit you to like ten per month or whatever it was. So, sorry, that's my pedestal. >> No, agreed on both accounts. But I'm going

to take what you said a little further, because from a cost-effectiveness perspective, you cannot beat Gemini 2.5 Flash. I don't know if I'll die on that hill, but it's a strong opinion, strongly held. Because it actually has a thinking mode, if you'd like to do some deeper analysis, >> and it is very inexpensive. If you were to look at the GPTs, they've got their flagship models and they've got their mini models, and the mini models are generally way cheaper. It'll be between those. It's not the cheapest on the market, but it is very inexpensive, and the bang for your buck that you get from those models cannot be matched. And so I'm really looking forward to seeing whatever they do with Gemini 3,

which is rumored to be announced. It's rumored to have already been soft-launched, >> and it's also rumored to be announced sometime soon. >> It's very difficult, because for me, I see these new model releases come out. >> GPT-5 was the most recent one, which I personally was really underwhelmed with, aside from the backlash around, you know, you don't have to switch models, it'll switch for you, and they kept funneling everybody to the cheapest, worst model, so everybody's experience was terrible. I was pretty underwhelmed by the benchmarks with GPT-5. It was an incremental upgrade, but honestly, you had the entrants from Grok, you had the updates from Anthropic, and so it was just like, okay, cool, you put a new one out. >> Great. Sure. >> That is actually the least-used model for me right

now, the GPTs. I just don't use them at all, whether in production for our customers or personally. But I think all of the models have gotten so good that as creators, as somebody who's making a living off of this, you kind of have to ask yourself: where are the existing ones falling short? What do I want to be able to create that I can't create with the existing models, so that I get excited when a new one comes out? And there are some things the models need to improve on. Vision is a good example, where a human is still way better at reading a page of text, and when it's not the best page of text, coming up with exactly what it says. A human will do it 100%

of the time. A vision model will do it 90% of the time. And 90% doesn't cut it when you're trying to automate something. So there are advances to be made, but for the most part, the more interesting thing to me is: can you make it faster? Can you make it less expensive? If you do those things, you make it more accessible, so I can throw more tasks at it that I wasn't able to before, and I have more of a business case for it as well. So I'm really curious to see what Gemini 3 comes out with, because so far I think they've been an extremely strong entrant, and I would love to see that trend continue. Sonnet 4.5 did not disappoint. It was way better at many things. So I want to see if Gemini can keep up. >> Well, yeah, and I

don't even know. I mean, we're on model talk right now, so just out of curiosity, I haven't checked, it's probably one of the less-used models: what does Haiku 4.5 cost? It's $1 per million input tokens, $5 per million output. >> Yeah, it's a third of Sonnet 4.5. So Sonnet is $3 per million input and $15 per million output, and Haiku is exactly one-third that cost. >> Yeah. And it's probably also like 80% faster. >> It's much, much faster. >> Yeah, but that's kind of interesting. They seem to sit on the most expensive relative model tier. >> They do. >> Yeah. So that's something to consider, I guess. It's good, but I don't really know how many people are actually using it, because they're in this awkward place

where it's cheap, but not as cheap as, you know... >> Well, I'll give you an example of who's using Haiku and Sonnet: companies that are almost locked in. There are a lot of customers out there you don't hear about who are on AWS, and because of their corporate security requirements, because of whatever regulations they're under, data can't leave AWS, can't leave their tenant. So they are stuck on Bedrock, and it is almost prohibitively expensive right now to run your own models on Bedrock. I know there are companies doing it, but you are still better off taking a flagship model, a foundation model, from Bedrock. >> Well, the best foundation model on Bedrock is Sonnet 4.5. No surprise there; it's one of the best in the world anyway. But the fact that they offer Haiku 4.5

now as well, a model that is one-third the cost and probably far faster: all of a sudden I have a cheaper model that is still quite capable. Anthropic says it's as capable as Sonnet 4. I'm not convinced of that yet; we'll have to see through a few more benchmarks, but it is quite capable. So that's a strong entry for a lot of companies who are stuck on Bedrock and can't get out of it. >> You know, I don't think people, at least in the space that maybe our audience is in, are really talking about Bedrock that much. It's not even something that's come up; you're the first person I've interviewed who's actually brought it up. So kudos to you on that. Just kind of getting more into something

that I do want to ask, because I think it's quite frankly kind of required at some point: what do you think is the future for job replacement, job enhancement, the job market, with the additions of companies like yours and your implementations, and the general market of AI continuing to improve? What do you think that looks like in the next couple of years? >> Jobs are going to be replaced. I generally am in the camp of thinking that the first ones to go are probably the ones people don't enjoy doing. And that's the back-office stuff that is just manual and really shouldn't require a human if technology were better. And I understand there's probably a human doing it today. They've got a job; I'm glad they have a job. But I do also firmly believe that

different jobs are going to be created because of these new tools. And I consider AI to be a tool. I mean, we have created what we would consider to be a digital employee using agents before. That's actually a common model for us: create a set of agents that does a job for a business. We would consider it a digital employee, one that you have to continually train, that can learn and get better, but that has to be managed. So I think the job market is already being impacted. I think anyone who is coming onto the job market, or is on it today, should be spending time understanding these tools, understanding what can and can't be done, and be that person on your team who becomes twice as efficient or twice as fast at doing

their job because you're adopting technology. And you know what's interesting? I consider all of these things to just be technology. They're generational technologies; you can't compare them to much else, but it is still a tool and a technology. So how many different companies have released major features that all of a sudden got rid of the need for a certain set of team members, or a certain set of requirements on your team? For instance, when support automation frameworks started getting better, take Zendesk as an example: Zendesk introduced a suite of tools that started classifying the high-priority tickets and triaging the ones that were most risky to your business. Well, great. All of a sudden you didn't need somebody whose only job was to triage tickets and classify them. I think that's a good thing. I think

that's probably not something that should be a job in a company. And that's a really light example, but extrapolate that to a lot more day-in, day-out tasks. And jobs are going to be replaced; they'll be replaced by something else, not just AI. And I don't think we even know what new markets are going to be created by AI. I think there will be new markets. It's really hard for humans to think of what those markets would be, but when those new markets are created, a whole new set of jobs will be created with them. And I think that's exciting. >> I think that is exciting. And I do have a question, I guess, about that. It does feel like maybe we don't know fully what's going to happen, right, with a lot of this like

adjustment, but I do think you're on the right track. I think there are different approaches to take, somewhere in the middle of a lot of the extreme takes. The phraseology I've used on the podcast for the last couple of months is "associate-level." I don't want to be mean, so I try to find the nicest technical way to say it. I think "marginally non-deterministic, associate-level knowledge work" is the nice way to string together technical terms where you basically know what I'm talking about: the low-level associate [snorts] work where anyone can kind of get hired and onboarded in a week and kind of do the job. Do you think a limitation that will be overcome with this, especially with MCP coming out, which I thought was really cool, is sort of

the connections to all the different aspects of software? Because from my perspective, from a reasoning standpoint, the models are going to get there. They may already be there. It's just a matter of practically building out agents that have connections to all the different aspects of, you know... I don't know if you've ever worked in paid ads, right? But there's no API that lets you click through the Google Ads interface and change bids, right? So... >> Yeah. No, I think that, first off, I agree with your assessment. And I actually think those are kind terms, but I think they're extremely appropriate terms. >> Yeah, they're very technically accurate. It just doesn't sound as bad as saying someone who's not really that technical is doing low-value work. "Low-value work"

would be the mean way to say it. The nice way is "marginally non-deterministic work." >> Yeah. Well, if you can teach somebody to do it inside a couple of days, especially by just handing them a handbook or rulebook, you could probably hand that handbook to an agent and it'll probably do a pretty good job. But I do think the larger barrier to adoption is access. Access to your data, access to your systems. Where does this agent live? A human, just by nature of being a human and being the one to drive every system out there, can access email, can put hands on keyboard, can autonomously do all of these things without my needing special access points for them. But all of

a sudden, when you have an agent on a system, then short of giving it access to a computer where it can do anything on that computer and just expecting it to complete a task... they're not there yet. They're bluntly not even close. I think they'll get very close. I think a lot of companies are going to spend a tremendous amount of investment trying to make the access points for their systems available. And the more they can open up those systems as accessible, the more opportunities they'll have to automate certain key flows, and that'll be a game-changer for them. No question. >> The true meaning of the word "game-changer," not whatever ChatGPT decides to say when you're asking it whether this workflow you've cooked up in

your head is appropriate for the thing you want to make. So, you know what, I think that was a great point. That's been my contention for a while: the second I saw reasoning models come out, I was like, it's just access, right? Because if they were able to >> Mhm. [snorts] >> get their fingers on everything, it'll work out. And that's where these agentic browsers get interesting, right? Because then we get into visual plus reasoning. I remember when o3 came out, I gave it a picture and said, "Where is this?" It figured it out within four minutes. I was at a Greek fest in Greektown in Chicago, and I was like, that's weird. >> That's impressive.

>> I was like, we're getting somewhere. No, I think that's an interesting callout, though. I think o3 represented a paradigm shift for everyone, when they realized that not only can this thing break down its own train of thought and follow it, but that translates into it being able to do things that couldn't be done before. It expanded the realm of problem-solving for these things, and that made everybody rethink what can be done, what can't be done, what will be done with these models that couldn't be done beforehand. And so if your first thought was, "Oh, now it's just about system access," I think that was really kind of prescient, because I think it's right, and I think you're seeing it play out today. >> Yeah. And then, like, a couple months later, MCP came

out, and I'm like, I'm not a prophet. Just keep your thoughts to yourself. But no, seriously, if you spend enough time in any industry, you can make educated guesses, and I think that was one of the few I got decently accurate. With that being said, man, we're at time. So, not to do anything else here, just plug everything you're doing at GenServ and tell people where they can go to check you out. >> Yeah, and I really appreciate the conversation today. It's been a lot of fun. So, if you're a business that is really asking yourself the question, "How can I use AI? What should I be doing with AI? How do I think about AI?" then we'd love to talk to you and understand how your business

should adopt AI and what that roadmap should look like. So check us out at genserv.ai, and also look me up on LinkedIn; you can find me at Christopher-hand. I post all the time about real business problems being solved by AI and a lot of our case studies. And if nothing else, I just love to chat about this stuff, so we can chat about it anytime. >> Absolutely. Well, with that being said, thank you so much for making the time here, Chris. And thank you to everyone for listening and watching. Please make sure to rate this show, and that includes you, Chris; we want to get your reviews up. Go to Apple Podcasts or Spotify, review the show, and make sure to check out everything Chris is doing at GenServ. That is genserv.ai. Thanks for watching. See you in the next one. Bye.