Episode 72 Sep 02, 2025 45:05 15.9K views

Mark Cowan's Smart Automation System - How Put It Forward Powers Business Growth

About This Episode

Discover how automation and AI are transforming modern enterprises in this captivating episode of the AI Agents Podcast.

Mark Cowan, Head of Product at Put It Forward, shares how his team has been using AI-driven decision automation to help businesses scale operations, personalize customer experiences, and boost performance—with fewer manual touchpoints.

From revenue optimization and churn prediction to project portfolio management and back-office efficiencies, Mark unpacks real-world enterprise use cases powered by long-term memory AI agents.

We also dive into the nuanced differences between automation, AI, and agentic systems, and explore the crucial role digital labor will play in the future of work.

Whether you’re integrating AI into your customer workflows, enhancing decision intelligence, or curious about how agentic AI intersects with traditional automation, this episode offers deep insight into the tools shaping the next generation of enterprise growth.
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⏰ TIMESTAMPS:
0:00 - AI’s Role In Modern Enterprises
1:07 - Introducing Mark Cowan Of Put It Forward
2:06 - Early Enterprise Use Of AI
4:40 - Predicting The Future Of AI Integration
8:00 - Real World Use Cases For AI Solutions
14:00 - Understanding Automation Vs Agentic AI
21:00 - The Limits Of Generalized AI Agents
26:00 - How Agents Could Replace Entry-Level Roles
39:00 - Enhancing Decision Making With Agentic AI
43:00 - AI Implementation Advice For Enterprises
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Sign up for free ➡️ https://link.jotform.com/0EtzlPps0S
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Follow us on:
Twitter ➡️ https://x.com/aiagentspodcast
Instagram ➡️ https://www.instagram.com/aiagentspodcast
TikTok ➡️ https://www.tiktok.com/@aiagentspodcast
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

Transcript

If you look at the modern enterprise today, what's the impact of AI on it? The answer is, it's already in the enterprise. Which is to say: what's the impact of automation technology on the enterprise, right? So, and I'm going to be arbitrary here, go pick a large financial institution and say, well, they have 100,000 employees. How many employees would you need to actually have if you didn't have those systems there today and still be running at the same scale, the same profitability, and all that sort of stuff? All things being equal, I would wager it would be at least 10 to 15x just to have that same business there. And so the impact of automation inside the enterprise is already there. We're just taking it to that next level. And how do

you go further? How do you go further with it? Hi, my name is Demetri Bonichi, and I'm a content creator, agency owner, and AI enthusiast. You're listening to the AI Agents Podcast, brought to you by Jotform and featuring our very own CEO and founder, Aytekin Tank. This is the show where artificial intelligence meets innovation, productivity, and the tools shaping the future of work. Enjoy the show. Hello everyone, and welcome back to another episode of the AI Agents Podcast. In this episode, we have Mark Cowan, the head of product for Put It Forward. How are you doing, Mark? >> I'm doing great. Very glad to be here. Thank you for your time. >> Yes, for sure. So, you're based out of Austin, which is obviously a tech hub right now. It's very cool. I think there are a lot of interesting companies

coming out of it, and you guys are one of them. So I'm curious, first and foremost, how did you get into the position you're in? You've been with Put It Forward the entire time. For me it's always an interesting question, how does someone get into AI? Because while it might seem like something that has been at the forefront of all of our minds for the collective last three years, you've been at the company for over 10. Seeing a transition like that, with a company getting into AI, from someone who's been around a company that long, that's always cool to me. So let's hear the backstory. >> The backstory: I had been in a number of operating and product

development roles, you know, leading organizations, some whose technology you've probably used, and running a really large technology team in a couple of large financial institutions. We had always been using AI in one form or another for really the last two decades, if you think about >> Okay. >> financial systems. There's a lot of machine learning there, and artificial intelligence isn't necessarily new to them; it's just very narrowly applied. The broader application of AI which you're speaking about, really in the last three years, is around large language models with a chat-based interface on top, and that's something that's really gotten into the popular zeitgeist, as it were. But in terms of using AI and using

it as a function inside of an application, it isn't new to Put It Forward. We've been innovating with it as part of our product for close to eight years now, and we have it applied in a number of different scenarios. Now, it certainly has evolved to include the public-facing AI, which is sitting on top of large language models, etc. And we have aspects of that inside of our infrastructure and our product. >> Okay, yeah, that's a whole thing to talk about. I think that's a fair point. Most people think of AI in the context of generative, right, and now agentic, I guess, in the last maybe calendar year, probably more like eight or nine months. When you were starting out at Put It Forward, what

do you feel like you guys predicted in regards to where AI would be at this point in time, and how far off was that prediction? I like this question for people who have been a part of it for a while. >> First, we were very grateful for whoever came up with the nomenclature around agentic, because it frankly simplified a lot of the conversation about the way Put It Forward was positioning and the problem set that we solve for our customers. Often we would say, oh look, here we connect the systems, and we connect your business process and your workflows, and we have this AI stuff that figures things out and makes it better, and then it tells you what the next thing to do is. And, depending on who you're

working with, there was a very quizzical look of, what exactly is that, and how do I describe it to the other people inside of our organization? So when we saw the definition of agentic come along, it really helped us in that regard. In terms of where we thought it would be applied and where we are in the market space: not radically different, in terms of the value that we bring into an organization. So I think it's important to describe, or maybe level set, the different types of AI that are out there. When you think about large language models and

agentic AI, and what's in the popular media right now, a lot of those are based on short-term memory inside of an organization, right? You're taking something that's immediate, you're asking it a question, whichever kind of question it may be: to build something, to respond to something, to think about something, to form an opinion, to research something, etc. And it gives you an answer, right? And immediately all of the context around that is lost. The input is gone; the reasoning model is largely finished doing what it's doing, and it has discarded that. Versus the other way of looking at AI, where you are taking the long-term and institutional knowledge and all the information that goes into making a prediction, or making an outcome or a decision, or choosing the next best action, and using that to define sort of

what your model is, and then basing it off of that, right? And so there's two different types that are out there that interoperate with one another on some level, but they are unique and distinct from each other. And so we initially vectored down the path of leveraging institutions' long-term memory. And what we mean by long-term memory is all the things that they do. If you look at an organization, and you can pick one, it doesn't matter what industry they're in, they really are a set of algorithms defined as standard operating procedures, ways of working. And that applies to marketing, sales, operations, technology, HR, finance, manufacturing, building, shipping, etc., depending what industry you're in, right? So when you look at it from an algorithmic perspective, you say, okay, hey, there's a series of inputs and there's a

series of outputs at each step. What can I learn from those? How can I model those? And then how can I improve that decision with a better outcome, or better information going into it? Which actually informs, frankly, the name of our company, Put It Forward: the best input, the best decision, put it out to the next person or the next step in the process to make it better, right? And that's how we look at the two, and each of them has a trajectory that interplays off the other. >> Okay. So, what does the average company that you work with look like in terms of size? No, I think that's one part of it, but

I know we talked about this before, like some of the industries you work with, but what are the problems they're facing, right? What size they are is another good question. And maybe a use case, or an aspect of your tool, because a lot of people are probably just thinking, what tangibly in a company, right, does, I keep wanting to say the wrong thing, Put It Forward help us with? >> So there's front to back, right? At the front end, say on the revenue side and revenue generation, there's things like identifying next best customer, next best action, churn risk, revenue at risk, those types of scenarios that are revenue-attached. Mid-

office, you can think about things like project and portfolio management, where you have teams that are executing and need to be synchronized very quickly with the next thing they need to do, and how do you optimize that? And into back-office functions like finance and HR, where you are trying to manage and mitigate risk, and trying to optimize what you're doing with fewer people and more systems at the same time. And so we step into those scenarios to help automate the decisions being made there as effectively as possible. The types of organizations and the types of industries: I can pretty much guarantee you've used our software in some way, depending on what you're doing

out in the marketplace. If it's booking a table in a restaurant, or booking a flight, or purchasing an investment product, or something to that effect, depending on who you're dealing with, you've probably used our technology in some way as part of that customer experience. And we've supported some social media platforms as well. It depends on the type of problem you're trying to solve for. But often our customers come to us with real, tangible problems. They're like, okay, we need to get better at keeping the customers we have. We need to maximize the share of wallet that our customers have in terms of cross-sell and upsell.
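The kind of per-customer decision Mark is describing, scoring every customer's likelihood of churning or converting so a next action can be chosen at scale, can be sketched as a simple probabilistic model. The features and weights below are hypothetical placeholders, not Put It Forward's actual product logic; a real system would learn them from the institution's long-term historical data:

```python
import math

# Hypothetical behavioral features and weights; in practice these would be
# fit on the institution's historical ("long-term memory") data.
WEIGHTS = {"days_since_last_login": 0.08, "support_tickets": 0.4, "tenure_years": -0.3}
BIAS = -2.0

def churn_probability(customer: dict) -> float:
    """Logistic model: map a customer's features to a churn probability."""
    z = BIAS + sum(w * customer.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def at_risk(customers: list[dict], threshold: float = 0.5) -> list[str]:
    # Flag every customer above the threshold: the per-record decision
    # that no team can make manually at enterprise scale.
    return [c["id"] for c in customers if churn_probability(c) > threshold]

flagged = at_risk([
    {"id": "a1", "days_since_last_login": 60, "support_tickets": 5, "tenure_years": 1},
    {"id": "b2", "days_since_last_login": 2, "support_tickets": 0, "tenure_years": 4},
])
```

Each flagged ID would then feed a next-best-action step, such as a retention offer or an outreach task, rather than replace the revenue team outright, consistent with the "part of a process, not the process" framing later in the episode.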

We need to get better at keeping the attention of the customers we have on the front end. How do we leverage what we have? Now, if you think about how you would do that with an LLM, the answer is you wouldn't, right? You couldn't do that at scale without actually having large-scale data going into it that represents your institutional knowledge. So you need a different approach: you need machine learning and artificial intelligence, and the same kind of infrastructure that you would use with an LLM, but you're applying it differently, and then integrating it into a workflow and a process flow, etc., right? >> Nice. Okay. You know, I think what's interesting about this, right, is a lot

of the verbiage you use on your website still has the term automation on there, and obviously there's AI on here too, where I feel like most people now utilize the phraseology of AI as the default. Oh, and agents, which you have as well on here. Could you describe to the audience what you feel the differences are between AI and automation, and maybe how agentic AI intersects those two things? I feel like that would be something good to cover. >> Sure. I mean, automation is something that allows you to run a process, run an outcome, with fewer hands touching it, right? Basically. >> That's a good answer. I like that. >> Very simply, if

you look at the modern enterprise today, and thinking forward to maybe some follow-on questions here: what's the impact of AI on it? Well, the answer is, it's already in the enterprise, right? So, and I'm going to be arbitrary, go pick a large financial institution and say, well, they have 100,000 employees. How many employees would you need to actually have if you didn't have those systems there today, and still be running at the same scale and the same level of profitability and all that sort of stuff? All things being equal, I would wager that it would be at least 10 to 15x just to have that same business there. And so the impact

of automation inside the enterprise is already there. We're just taking it to that next level, and how do you go further with it? So in terms of automation, yes, it's about being able to run something with fewer hands or fewer eyeballs as part of that process. Now, there's also automation in manufacturing; there's automation in discrete chemical-based processes, or deep manufacturing, or automated processes as well. There's a number of different places where you can apply it. There's also automation that you can apply to decision-making, on the customer side, on frontline types of scenarios, and so on. So that's number one. Number two, in answer to your question: what is agentic? Agentic is

really where you're giving autonomy to something that is part of a process, to make a decision for you. This is done every day out there as well; now we're just applying it to other parts inside the process. An example of an autonomous decision that's made for you today: if you've ever had the opportunity to be in a self-driving car. >> I haven't yet. >> They're pretty cool. >> They're in Austin, if I'm not wrong, right? Like, Tesla just released, or well, they are beta testing there. >> They have some there, yes. The Waymo cars have been here for a while. >> Oh, yeah. Is that the group that was from Google? >> Yeah, it's still Google as far as

I understand. >> From a scale perspective, I don't know exactly what they're doing, but I think it's something like 50,000 trips a day, or something to that effect. It's quite significant. But that being said, that thing makes decisions for you every day. If you're inside of an airliner flying around, the pilots take off and they land, but there's also a ton of automation systems there that are helping them out, especially on long-haul flights. There's an autopilot, and it makes decisions based on various inputs. So there's a number of different automation systems that are out there. >> And then, so, where's it going? I think that was the

intersection point, right? >> Yeah. >> There's a couple of popular ideas out there that are being experimented with at large. You have automation systems inside the enterprise which are very discrete, robotic process automation systems, where they do things like: if I have three different systems, I've got to fill in a bunch of fields here, and when I click enter, I take the output of that and cut and paste, or do some other stuff, and put it into the next system in the chain. Those systems effectively make scripts, which are kind of fancy macros, that input and complete the next step in the action, or the next step in the chain. With the intersection of that with large language models, the hope there

is that people say, well, I can now start creating the input into those and driving it on its own. So that's one approach. There's also another approach, which we think about a lot at Put It Forward, from an algorithmic perspective: those are series of actions that don't necessarily need to involve an end user. >> Hm. >> They don't have to have the person in front of the screen. You actually just need an event, or a piece of data, something to actually happen, to trigger a series of outcomes, and how do you manage that across the systems, right? Which then may involve a human at some point in the process. So there's different schools of ways to approach that. >> Interesting. Yeah, I think what I

liked about what you just talked through there is that there have actually been some small, not iterations but implementations, of automation for a while. You know, I'd never considered that autopilot could be classified as automation. But in the world that I'm in, it's a lot more specific knowledge-work, business-type automation, right? Less on the physical side, which I would probably categorize as more physical, like, what's the word I'm looking for? >> Machine. >> Machine, yeah, sorry. Geez. Right. So machine automation is not something I had thought about, and that's probably been in the world for a while, >> versus now what we're seeing with >> the knowledge-work side of

the knowledge work. >> And so there's a question to ask in that side of the space, when you think about the future: what's the ratio of human labor to digital labor? That's the real question to ponder, and how do you use digital labor in a very effective way? I know inside of Put It Forward we have probably the equivalent of a dozen full-time digital laborers doing different things. Now, do they run autonomously, and can they run without a person interacting with them? No. >> Okay. >> In our business, that is, not necessarily the way we apply things to a customer, right? And that can be as simple as, you know, you're writing a requirements

document for something, or you're writing a piece of marketing collateral. How do you use some of those tools to do research for the subject that you're involved in, versus doing it yourself, right? Those types of agents certainly make a difference. And also the popular ones that are out there around coding, the copilots, or what in a lot of scenarios are now called agents but are really just rebranded copilots. But they are what they are, right? They sit alongside you on the screen, and you can ask a question, and you can probe and go further into it, and help augment your own learning around that, >> at a higher speed. >> Okay. So,

when do you feel like we'll be able to see, I guess, two things? One, for your own company: when do you feel like we'll get to autonomous agents internally, if you think that'll even happen in a reasonably foreseeable future? And when do you think the implementation you currently have will be consumer-ready on a consistent basis? Because I can imagine there's a lot of context, a lot of processes, and a lot of really strict guardrails you're putting on this agent to prevent error, right, in your own workspace. So, like I said, when will you be able to have those run decently autonomously internally, and the agents you have now, when do you think those could be consumer-facing on a more readily available

basis? >> So the answer to the question is in part how granular you want to be. I think the top-line answer is we already have autonomous agents running inside of Put It Forward and inside of our customers' environments. >> Right. >> They are part of our customers' workflows and process flows. The way you do that is actually by getting very granular on the level of work that you want it to do. Where we see a lot of conversation happening in the popular press and out in the field is this notion of having a generalized AI agent that can do >> work that it's not, you know, yeah, exactly, that it can just do everything right now. >> Yeah. >> Now, think about that from your own perspective, right? You probably had a first job somewhere, and you

know, if you went in there and they asked you to do everything that you're doing, say, for this podcast, you'd be like, well, wait a second here. I came from this area; I've got to learn this; I've got to figure out a whole bunch of stuff. That's not a reasonable expectation. And so you have to have something that's focused on a very specific set of tasks in order to make it work, right? That's why it works well with coding, right? Because coding is highly structured. >> That's a good point. Yeah, I didn't consider that definition. >> Right. And it's highly structured in describing a set of tasks and problems and things that it has to do. And that's why you can

train it on that. And for most of the LLM models out there, the initial training set, frankly, was code, right? And that's why you get a very highly structured set of data when it's responding to you. The rest of us have been trained, for the most part, on the open internet, which opens up other questions. So the answer to what you're describing is that it's about how low-level you atomize the problem that you're trying to apply an agent to, and whether you can be very discrete about it. For example, say you're a company that is dealing on the consumer level, or on the business-to-business level, with people that are buying and selling stuff, or

buying stuff from you, right? And you're dealing with that at scale, and you need to be able to say, well, I need to personalize this in a certain way, or I need to be able to identify, is this someone that actually has the probability of converting, or the probability of churning, going away from us to a competitor? And how do you do that at scale, right? I mean, that's a function inside of, you could argue, marketing or revenue teams or whatever, but it's a function inside the enterprise that someone has to do. But when you start dealing at scale, it's just not possible. And what I mean by it's not possible: it's not that people aren't smart enough; there's just too much data

to actually manage at a cognitive level in a reasonable time. So you have to, yeah, exactly, you have to slice that into a small function. That sort of stuff you can do all day long with AI, like from Put It Forward. If you wanted to apply it like, oh hey, I want you to actually become my new marketing director, Mr. or Mrs. AI digital labor, that's just unreasonable. But as an assistant to somebody who's working inside of there, that's a very reasonable expectation. >> Yeah, it's a very good point you make, and a theme that we've been talking about on the podcast recently that I'd like to ask each guest about. There's a lot of misconception, I think, among the general public who are

not directly in the know, of, hey, I just want to have an agent that does a job, and not a job in the sense of a task. They want an entire high-level, very detail-oriented executing performer at a C-suite level; that's the magical expectation. But what you articulated is that, task-wise, yeah, they can do these things; they can pattern-recognize about a specific thing. There's a very interesting situation here, though, that begs the question, and this is not to belittle associates, I hopefully haven't done that in previous episodes, but associate-level roles, in marketing agencies for example, or in a lot of different companies, are for all intents and purposes a list of if-then logic that they were told by their manager to

do, right? They're just doing the physical clicking on the screen, so to speak. I remember being an associate marketing manager at an ads agency. I don't think my job was actually practically hard; there was just a decent amount of volume, because of the way that shakes out. When do you feel like the stringing together of tasks will reach the point where people at an associate level, who have the least amount of experience and don't have to make these crazy strategic decisions, when will those roles have high-level competition from products that are selling agents? >> You know, it's funny, we deal a lot with marketing agencies as well, for one reason or another, our marketing and revenue teams, and we see an impact of that

frankly, right now. >> Nice. >> But it's not that you're hiring your end-to-end marketing associate. What you're doing is taking slices of what they do out >> and automating functions of that, so that somebody who has, I'm going to be arbitrary here, two or three years of experience in a particular way of working inside an agency can say, oh, I can now do what I had these people do by typing in some stuff and getting it back right now. >> Applying that to the larger enterprise and extrapolating that out: very difficult. Why? Because inside the enterprise, you need to have repeatable motions and repeatable outcomes. If you've ever gone to an LLM and typed the same question in twice, you likely have

not gotten the same answer. Contextually, it may have sounded the same, maybe pretty close depending on the nature of your question. But if you're asking it to research something, or to consider something, likely you haven't gotten exactly the same answer. That's problematic inside the enterprise, because often you need to be able to run a process forward and know exactly how it worked. If you're ever under, I don't know, an audit function or a regulatory scenario, or you're in one of those types of industries that is highly structured, you need to know exactly why something happened when it happened. So you need to know, well, why did the machine, the LLM in this case, give the answer of ABC? You need to be able to trace

that all the way back, and you can't do that with those types of systems. They can have citations, etc., but you can't, six months from now, go back to the query that you asked it and say, well, why did you say this? >> Yeah, that's a good point. It is kind of a mystery. What's very interesting is that there are so many contentious topics that come up, and what I find very frustrating is when people have these arguing contests, like, I have this list of facts, you have this list of facts, right? They'll be arguing about a topic. Mainly this stuff happens on Twitter, so this is just advice to stay off

Twitter. But they'll use ChatGPT or Grok, and they'll be like, "Well, see, this is what it says about this." And I'm like, but you framed it two words differently and it gave a different answer. Or even the same answer is different, or the answer is different with the same question. So that's a good point. And I think that begs the question, though, with tools like what you guys are building: how do you make sure that processes are replicable, right? And that solutions to problems that come up often are the same over and over again? Because I can imagine that being a difficult thing when making things to sell to companies and providing solutions to them, when they see these different things that happen. And

I understand we're talking about a generalized tool versus what you guys have, which I'm sure is very specific and well trained, because ChatGPT is just trying to reach the masses. It's obviously a different ballgame, but people probably still want to know: how do you keep them in line, so to speak? >> Well, things like ChatGPT, first off, it's a super powerful tool. It's great, and so is Perplexity, and so are the other applications that are out there. But it's trained on the internet, right? And so therefore, >> classic, >> you know, it's lossy, right? And so when you look inside the enterprise, and we think about

the long-term knowledge that's inside the enterprise, you actually have to train on that in order for it to be contextually relevant for the process that you're part of. So, in answer to your question of how we ensure that: one is that you're designing the AI as part of a process, not the process that you are working on, right? You have error correction and all that kind of good stuff in it, but you're also applying it into scenarios where ambiguity is expected and also accepted, right? So, are we applying the AI that we use into, to be arbitrary here, the manufacturing of ball bearings that go into jet engines? No. Or rockets or

something, right? Those things require an incredible amount of precision in their manufacturing process, and that's where you don't want to have variability. But you can accept variability when you're dealing with, like, well, this is the spot price of this commodity and we're buying it today, and what's the risk relative to our capital allocation, or something like that. You can do things like that. You can say, well, Dimitri is a customer of ours today and he's been hitting our website all over the place. What's the probability that he's going to become a customer, and what's the probability that he's going to buy a lot, medium, a little, or none? And then you can take actions based on that, right? And then use the output

of that as the feedback into that model. >> Okay, yeah, I think that makes sense. And just to piggyback off what you said about it being trained on the internet, with the addition as well of web search capabilities >> with a lot of these tools, I don't think people are quite aware of the, I think there's two different problems. Circular reasoning isn't the right phrase, but circular content, maybe. I'll start with this: it's pulling from a list of publicly indexed pages that rank well. Those are primarily blogs, for a number of SEO reasons I won't get into. But since those are the most easily and readily available things to get pulled from, what are people attempting to do in order to reach more people? They're making more blogs. And what are they using in order to

make blogs? They're using AI. >> So if there is an incorrect piece of information, it's essentially a circular feedback loop of incorrect foundations built on incorrect foundations. That's the one thing. The second thing as well is that people will then use that as a source of truth without checking, and it's just very frustrating. If I web search now on Google, I'm sure you're familiar with the new AI Overview, and they still have the thing on the bottom that says, you know, it could be inaccurate. And it often is inaccurate on a lot of very specific questions, and it's always about framing and that type of stuff. So I kind of miss the structured snippets world that we used to live in, where I would at least be able to read that and know whether it actually

answered the question correctly or not, or was even answering the question I was asking. Because sometimes it simply doesn't get the context. So I think that's something different to point out: what you're dealing with is a strict set of information that is demonstrably true, right? Rather than what could just be a list of things that weren't exactly accurate at first and then were pumped with more content that isn't actually all that true. So that's where some of these things come into place. >> Yeah. And just to follow on that, to say it a slightly different way, what you're describing is: where does topical authority come from? >> And when you look at the indexes, whether that's, you know, pick your

favorite LLM that's out there that you're using, >> they are also backstopped by the traditional index engines, whether that's Bing or Google or what have you, right? Those things are also working in concert with the LLMs. And so that's how, when you go into one, it can actually say, well, here are the citations, and you can navigate to them if you want, right? It's not building the same kind of schematic map of the internet the way traditional SEO works. But when you get into GEO or AEO, whatever you want to call it, it's establishing topical authority on a different basis. What we're doing as well is recognizing, inside the enterprise, it's like, okay, hey, there's actually

no greater authority on your customers than your CRM system and your marketing system and your support system, because that's what they do, right? At the end of the day, why would I go out on the internet to find out which products Dimitri likes from us, right? When I just have to go into this system. And by the way, I can see 10,000 other ones that are very similar to you, right? And see what's going on, all at the same time. >> That's sort of a fundamental difference, but the common thing between the two of them is topical authority, right? >> That's a pretty good point. Yeah, topical authority. Where does it come from? Fair enough. Okay, well, you know, you talked a little

bit about how right now we're seeing an impact on the marketing agencies, so to speak, with the different tasks getting automated and whatnot. Where do you think the constraint is right now on that not being the entire associate-level job being taken out, and how long do you think that constraint is going to take to be removed? That's an interesting question, >> right? Because it could always be compute, it could be training time of the model, compute time in general. Because if I'm not wrong, I'm forgetting the time frame, maybe it's every six months that consecutive agent runtime doubles right now. There are some interesting stats out there. >> Well, the sheer amount of compute that's being deployed in North America and Europe and the rest of the world

is staggering. >> Yeah, absolutely. >> You know, and so, never mind everything that comes along with that once you have compute. >> There's that whole chestnut. Yeah. >> But there's also software that has to be deployed on top of that to make it work, right? And if you look historically at the deployment of technology, first does come the hardware layer, certainly, but then there's a multiple in the amount of software that comes on top of that, and then services, etc., that come along. But back to your question about what's the prediction around timeline before you could have something autonomous do a complete job. I guess it

depends on the definition of what that job is. You know, if you go back to your own previous experience of being in an associate-level, first-career job at a marketing agency, depending on the size of the agency and what you're doing, I'd argue probably a lot of that could be automated. Now, could you automate that without having any supervision and without any checks over it? No, you wouldn't do that. So I think that orchestration, supervision, management function is not going to go away. I also think that if you were to play this forward and look at what the typical knowledge worker is going to look like five years from now or ten years from now, you would say, well, they're actually going to be an individual

that has half a dozen, a dozen agents working with them or for them or under them that they are engaging with, right? That, I think, is kind of what it'll look like. The timeline around that is hard to say, because you're also asking me to predict external factors like the economy and the environment and the political landscape and so on that influence what happens inside of an organization, right? >> Hm. Yeah, that's a good point. I feel like there are so many different things happening. It's moving so quickly; there are so many questions people are asking. What is the main aspect of agentic AI that, A, you're most intrigued by in the next calendar year, and, B, you feel like your company is most poised

to take advantage of and bring to market? >> One of the most interesting things that we see around agentic AI is how we help people make better decisions. I think that certainly aspects of that are contained in the LLM models that are out there, at least sort of that interface into them and training them. And this kind of goes back to your question where you have dueling answers. Well, okay, are you just having people read dueling readouts, or do they actually have a high-level cognitive capability where they're critically reasoning about what they're seeing, right? If you surrender critical reasoning to what you're reading on a screen, we're all in trouble, right? At some level, right? And so I think that, you know,

you still have to think about what it is that you are seeing, even if this allows you to get to the answer, or at least closer to it, much faster. So, bringing it back to what we see inside Put It Forward: it's the stringing together of different, very discrete agents inside of a process that can work together, that involve humans at critical points, and that humans can also drive themselves. That's where we see the things that are very interesting out there. And what does that translate to for the modern enterprise? Well, it translates into doing more with less, more efficiency, higher-quality decisions, and an improved way of working. And then ultimately that translates into, hopefully, a better customer experience or better quality of

life. Let's just say that, for all of us, that's where I think it ultimately goes: >> a better quality of life. That would be the goal for everybody, wouldn't it, for what we're doing with >> It would be. Well, someone invented the refrigerator at some point, and someone else said, "Oh yeah, that actually makes my life better; that's a good thing to have," and so on and so forth, right? Not to trivialize it, but this technology does have to have meaning and purpose. If it doesn't, then don't use it. And I think the world is full of examples where technology has been created without a clear problem it can be applied to that's a benefit to everyone

else, right? >> Yeah, good points. Is there anything that you would like to finally plug or make a statement on before we end this out? I think we're getting to the end of the episode. >> I think that when people are considering where to apply AI inside their enterprise, it requires careful thought about how to do it and what you're actually going to get out of it. I think also, if you don't have the skill sets around data and data intelligence and decision intelligence, you need to acquire them. Surrendering this to vendors or to LLMs that are out there is potentially very dangerous, unless you have very strong acumen around the technology and the problem space that you're solving for. Inside

Put It Forward, we try to make that as transparent as possible. Everything inside the organization and the products and the platform is visible to the people that use it; it's not a mystery. You know exactly why you're getting the outcome. You can go forwards and backwards and have traceability around why a decision or a recommendation was made. And I think those things are extraordinarily important inside the enterprise going forward, especially as we start relying more and more on these systems. That's kind of where I'd frame the conversation, as it were, if you were looking at it. >> No, absolutely. I think it's a fair warning. Delegating your entire mind to what's on a screen is probably not the most advisable thing to do. But I guess

since, for the purposes of this episode, we're closing it out, I'll plug what you're doing for you, since you actually chose to give advice instead of plugging, which I give you kudos for. Shout out to Mark. Thank you so much for coming. And please, everyone, check out putitforward.com. That's putitforward.com, not payitforward.com, which I accidentally said in the first iteration of this recording. But anyway, thank you all so much for listening. If you liked this episode, please leave it a review on Apple Podcasts, Spotify, and the works. Thank you so much for watching. See you in the next one. Bye.