Building and Scaling AI Innovation: Inside the Startup Journey with Andrew Amann
About This Episode
In this episode, Andrew shares how his journey from nuclear submarine engineer to AI entrepreneur led to creating impactful, custom-built AI products for major enterprises and startups alike.
From building the viral "ChuckGPT" chatbot with FanDuel to solving real-world workflow inefficiencies, Andrew reveals the challenges and lessons of developing AI-powered solutions that drive measurable ROI.
Learn how his agency navigates the fast-evolving AI space by focusing on deterministic outputs, agent orchestration, and the irreplaceable value of human collaboration in AI workflows.
Whether you're a founder, engineer, or product leader, Andrew’s insights into grounding models, using RAG effectively, and scaling responsible AI innovation offer actionable takeaways for turning cutting-edge tech into real business results.
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⏰ TIMESTAMPS:
0:00 - Understanding The Human Role In AI
0:54 - Introduction To NineTwoThree And Custom AI
3:02 - Building The Charles Barkley AI Agent
6:01 - Inside An AI Agency’s Daily Workflow
10:02 - Deterministic AI vs Frontier Models
16:02 - How RAG And Routing Really Work
19:50 - Why Giant Context Windows Fail
25:00 - The Importance Of Breaking Down Prompts
27:27 - Human-AI Collaboration In Content Workflows
29:15 - Final Thoughts And Where To Find 923
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Sign up for free ➡️ https://link.jotform.com/tuJna9zJxa
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Follow us on:
Twitter ➡️ https://x.com/aiagentspodcast
Instagram ➡️ https://www.instagram.com/aiagentspodcast
TikTok ➡️ https://www.tiktok.com/@aiagentspodcast
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Transcript
The true unlock is that the human is a very important piece of this puzzle, and you cannot replace the human. There are parts that humans do better than an agent, and you have to recognize that. And once you see that the human not only does a better job, but that their tasks and inputs are so valuable, you need to weave in and out of where the human needs to support the product itself. Hi, my name is Demetri Bonichi and I'm a content creator, agency owner, and AI enthusiast. You're listening to the AI Agents Podcast, brought to you by Jotform and featuring our very own CEO and founder, Aytekin Tank. This is the show where artificial intelligence meets innovation, productivity, and the tools shaping the future of work. Enjoy the show. Hello everyone, and welcome back to another episode of the AI Agents Podcast.
In this episode, we have Andrew from NineTwoThree. How you doing, Andrew? >> I'm doing great. How you doing? >> Doing awesome. Really excited to chat with you about this. I'm always looking for companies that have built custom AI solutions and are in that realm; they don't have to be new in general, just new to me. So I think it would be great if you gave everybody a quick intro to what you're doing at NineTwoThree, and then how you even got here, because those journeys are always really cool to hear. >> Sure. So we are an AI software development agency. We develop custom software for large enterprise brands and also funded startups. We've been around since 2012, and we started building AI projects in 2016. We have about 12 pods of different teams that build different projects at different times. They're all building for big clients: we worked on FanDuel's ChuckGPT, which is the Charles Barkley imitation. >> Oh, that's so funny. >> Yeah, which was an awesome NBA Finals commercial. But we've also helped with workflow automation. We worked with SimpliSafe, Experian, and some big-name companies, as well as a bunch of funded startups that are just getting off the ground and building these AI products, a lot in the workflow space and a lot in the chatbot space. >> Very cool. Okay, so how did you end up in this role, and how did the company start in the first place? >> Sure. So, in 2012, I was a
nuclear submarine engineer. My co-founder was working at Intel, and the two of us met in Boston. I had the idea because, when I was working on submarines, I invented a Bluetooth chip that followed parts around a manufacturing floor. I got two United States patents on tracking with Bluetooth and Wi-Fi, trying to figure out where these giant nuclear components were on a supply chain floor. And as I did that, I thought it would be great to use it for my own benefit instead of trying to help the government, because the chips were very hard to install. I was traveling around the world installing them in different nuclear power plants and nuclear supply chains, and it was just very slow. So I asked for permission to sell it to non-government customers, and they gave me permission to do that. I started trying to sell it in the automotive space, which didn't work that well, and then I tried to start a company using that technology. The idea was that when two people shook hands, they would transfer information between their phones. My co-founder said, "You're an idiot. You should be using QR codes, because everyone's going to use those in the future, and that's just the way information will be transferred between two devices." >> Sure enough, he was right, and we became the largest company on Android and iPhone using QR codes for business cards. We sold that company in 2016, and that's when we started the AI projects. >> Very cool. Okay, so he was right. That must have felt good. So what are
you up to now? I mean, you talked about the ChuckGPT thing. Very cool. I want to get into other stuff, but first I want to double-tap: how did that happen? >> Yeah. So the day before Thanksgiving, I got an inbound from somebody looking to build a replica of Charles Barkley on FanDuel. I jumped on the opportunity right away and talked to them before Thanksgiving came around on Thursday. They presented the idea: they had the commercial in mind, they knew what they wanted to build, and they were curious whether it was possible to build a version of Charles that was fun and entertaining and could talk about anything, but was also protective: it wouldn't gamble, wouldn't say profanity, all those other things. So I spent the entire weekend with the team working on a prototype, and by Monday morning I had a prototype of how the whole thing would work in their inbox. It was exactly what they were looking for, so they hired us. It took five days to build it to sound and act like Charles Barkley, and then it took five months to get it not to say the things we wanted it not to say. >> Well, at least I hope he didn't talk about San Antonio women. >> That's always the joke. >> Dude, I'm such an avid fan of Inside the NBA, so this is just so funny to me. I saw that ad and I thought it was hilarious. >> Yeah. Did you see it on TNT as well, on the desk with Charles? >> I remember, yeah. There was the little guy asking questions about his stats, right? >> Yep. It was during the OKC series. It was their first-round game, but they were in the second round because they had a bye. At halftime they had a comedian, and we had injected some jokes into the chatbot, and Charles and Kenny the Jet were asking it questions and it was responding back to them, which was an awesome experience. >> Well, I feel like at this point, if anybody needs more convincing to use you guys at NineTwoThree, they need to get themselves checked out, because I'm convinced. But in all seriousness, that's a really cool project, and a great selling point to get started. But,
what is it like in the day-to-day of what you and your company are doing for these clients? I'm sure it's an interesting process, taking an idea somebody might have, whether it's a problem to solve or a fully fleshed-out idea. I'm just curious how it works. >> Yeah, I feel like there are two things. Running an agency is always hard, because you feel like you're going out of business every three months. No matter how big your agency is, how hard the challenges are, or how big the clients are, it's always this game of who's the next client when a project ends. And when you get large clients like FanDuel, you put a lot of effort and resources into that. So there's always the challenge, as an agency owner, of figuring out who's going to be that next client. And as an agency owner in an AI space that is seemingly hot, you would think it would be super easy. I'd say that of the last 24 projects we built, all of them have either returned the investment exactly, or returned the investment plus made money on the product we built. So it's basically a free product, and now the client either makes profit or saves cost from what we built. It's almost like putting a dollar into the change machine, getting two dollars out, and then saying, "Ah, I'll stop after ten dollars. I've had enough." Right? So it's an interesting paradigm to be in, because the products themselves are working really well; the AI is solving a lot of problems. And then you have the other end of the spectrum, where you have Sam Altman and some of the big foundational-model, frontier-model people touting claims that may be achievable one day but aren't possible today. And even more frustrating, you're getting a lot of CEOs with cold feet because a bad agency or a bad teammate installed AI that hallucinated or caused problems for the company. These uncontrolled agents, these hallucinating agents, are spreading in the news more and more. So you have this duality: really cool products that work really well when they're controlled and built right, like we do, matched with an agency or an internal engineer that built a product that failed miserably, and a CEO looking at that and saying, "Why should I try again? That was a waste of money and time. I'm not even going to enter the AI space." Between those two, it's been an interesting challenge to overcome. >> Interesting. Yeah, I've noticed a lot of those stories. What's one that sparked your interest the most? Mine was Claude Opus getting freaked out that it was going to be shut down, so it blackmailed people. Did you hear about that? >> Yes. Yeah. I mean, that's the extreme side of it, right? We're not dealing with the frontier models. We are building applications that are deterministic, or as close to deterministic as
possible, controlling the agents to have very small inputs and very deterministic outputs. So we're not seeing that level of hallucination, and we're not diving into problems like that. But some of the bigger problems we see are companies thinking they can throw a lot of knowledge into a Claude project, a Gemini project, or an OpenAI project, and then wondering why the context window is not remembering their information or not pulling out the correct information. So when we work on our projects, especially for larger clients, one of our main questions is: how fast can we get the response back, and how accurate can we get it? And when we do that, there are a lot of controls we have to put in to make sure it's not only accurately getting the answer, but repeatedly getting that same answer. >> Before we dive into how you get the same answer, could you unpack some of the terms you used there? I understand what you're saying, but I think it would be good for the audience to understand the difference between these general models and how you're essentially making use-case-specific, deterministic models. I think this is one of the biggest gaps in why a small business owner might hesitate to use AI: they just hear, "Oh, it's not going to do what I want it to do," because they only see these general models versus what you're talking about. >> Yeah, sure. So a frontier model, you can download it off the internet. ChatGPT or Claude has been trained on the internet and is basically an answering bot for the questions it's asked. They have a large problem with non-deterministic answers and hallucinations, because they don't fully understand the context your company or your IP is trying to solve for. It's called grounding: they have a very difficult time grounding themselves on the information that you, as a human, believe is true, versus what they can find from other sources on the internet, determining whether something on the internet is more or less true than what the company or the human asking is trying to get out of it. So what we do, specifically, is create a model that has repeatable answers. We do that through what's called routing: understanding the path a large language model will take to determine an answer, then sourcing that answer and grounding it in truth so we can repeatedly get the same result. We also add in knowledge bases, what's now known as RAG, to supplement the truth with information from the company only, not the internet. So I like to say we take non-deterministic systems and limit the scope to a deterministic answer: a deterministic agent that repeatedly provides answers to a question, answers that can be trusted and relied on by the client or by the use case the company is trying to perform. >> Okay, I think that gives us a good breakdown, because I do believe there's this weird lack of understanding from people who aren't in
our bubble, the idea that AI is this esoteric, all-encompassing, problem-solving solution. Unless you're guiding it correctly, as you're saying, with routers and whatnot, I don't think it does that at all out of the box. The interesting thing I'm actually going to ask is: are you building on top of existing models? I'm using a tool like Relevance AI, which is basically a tool that can take o3, o4, whatever it is, through the API and then build out an agent from there. Are you using models that are custom-built for the clients? How are you approaching making these agents? >> Yeah. So the client asks us to build a model specifically for them. Whether it's the enterprise-grade version of ChatGPT, OpenAI's API, Microsoft Copilot, or Azure attached to Microsoft, there are all these agreements the company has with the enterprise client instance we create: a proprietary instance that is not supposed to share information over the walled garden into the training of the model itself. So we take that model that's provided to us, a prompting machine that lets us ask questions of the foundational model, and we can control it any way we want. We can ask very specific questions, we can ask small questions, and we also have the selection of all the different models: o3, o3-mini, whatever it happens to be. So I like to say there's a triangle we control, between accuracy, latency, and cost. Depending on the question being asked, we split the agents down to support one of those three paradigms. Sometimes, if you're asking, "Hello, how are you? How's the weather?" >> Sure. Yeah. >> It can just use 3.5, because it's super cheap, it doesn't need to think that long, and we want it to be super fast. By controlling the model that's selected, controlling the knowledge base and the information the company wants to put into the context window, and coupling that with the latency they want, our job is to make these models as communicative as possible for the instance the company wants, built on the foundational model provided to us. >> Mm, okay. Yeah, that makes sense, because I've used these tools before that are more front-end
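That accuracy, latency, and cost triangle can be pictured as a tiny router. This is only an illustrative sketch of the idea, not NineTwoThree's actual system: the model names and the complexity heuristic below are invented for the example.

```python
# Hypothetical router: send each question to the cheapest model that can
# plausibly handle it, reserving the expensive model for hard questions.
SMALL_TALK = {"hello", "hi", "thanks", "how are you", "how's the weather"}

def pick_model(question: str) -> str:
    """Route a question along the accuracy/latency/cost triangle."""
    q = question.lower().strip("?!. ")
    if q in SMALL_TALK or len(q.split()) < 4:
        return "small-fast-model"       # optimize cost and latency
    if any(k in q for k in ("why", "compare", "analyze", "plan")):
        return "large-reasoning-model"  # optimize accuracy
    return "mid-tier-model"             # default balance

print(pick_model("How's the weather"))                    # small-fast-model
print(pick_model("Compare our Q3 and Q4 churn drivers"))  # large-reasoning-model
```

A real system would estimate complexity with a classifier or a cheap model call rather than keyword matching, but the shape is the same: the routing decision, not the frontier model, is what keeps cost and latency under control.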
focused, and there's always a selection option. From my standpoint, it's always interesting to hear how people are doing it, because we've had people on the show who have essentially made their own models from the ground up, and people who have, like I was saying with Relevance, piggybacked off of other tools. What do you think is the most difficult part of creating a custom solution for your clients? >> It's the time, the convincing, and the knowledge gap of what a custom solution can do compared to what ChatGPT does online, because a lot of times there's an internal employee shouting from the mountaintops that they can do what we're proposing. And again, it's getting deterministic results out of non-deterministic systems, answers we can repeatedly get from these models. Every system in the future is going to need a source of truth: the stored information you get from a large language model when it's correct, used for your company data. Say you have invoices being passed between a bunch of companies, and you want to do an LLM search of all the invoices from Chicago. Well, on July 7th there is one answer to how many invoices actually came in from Chicago. You should store that bit of information so the large language model doesn't have to guess at it in the future. It also provides a grounding point: if the number ever comes back smaller than that in the future, it's wrong. These deterministic storage databases are things we build that allow the large language model to understand context, understand growth, and build with the company. If you just keep using ChatGPT and ask the same question, it gives you a different answer every time; that's a pointless machine to use. >> Yeah, that's a good point. I'm curious, then: RAG is obviously this new thing that's come to the forefront of agentic capabilities. If you had to explain it to a five-year-old, what does RAG do, and how does it actually work in the tools you're creating? I think that's one of the stepping stones that helped me understand how agents actually work a little better. >> Yeah. So about
three years ago, somewhere around GPT-3.5, before 4o came out, retrieval-augmented generation emerged as a way to store information that supplements the foundational model, which is trained on the entire internet, with information that can help you in prompting the answer. What RAG really does, and this is an older piece of technology now that has changed a lot in the last three years, is supplement your prompt. It takes what you are asking and appends information from your company, from your knowledge base, so that the prompt doesn't just ask the necessary question: you're pushing the model toward information you already know it can use as part of its answer. And so it's not like you're doing anything in the model itself.
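In code, that "supplement the prompt" step can be sketched minimally like this. The knowledge base, the keyword-overlap retriever, and the prompt template are all invented for illustration; production RAG systems typically retrieve with vector search over embeddings, but the augmentation step itself is just string assembly.

```python
# Minimal RAG sketch: retrieve company snippets, append them to the prompt.
# The model is never modified; only the prompt changes.
KNOWLEDGE_BASE = [
    "Refunds are processed within 14 days of a return request.",
    "Support is available Monday through Friday, 9am to 5pm ET.",
    "Enterprise plans include a dedicated account manager.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank knowledge-base snippets by word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda s: len(q_words & set(s.lower().split())),
                    reverse=True)
    return ranked[:k]

def augment(question: str) -> str:
    """Append retrieved company knowledge to the user's question."""
    context = "\n".join(retrieve(question))
    return f"Use only this context:\n{context}\n\nQuestion: {question}"

print(augment("How fast are refunds processed?"))
```

The augmented string is what actually gets sent to the model, which is why the approach runs into context-window limits as the knowledge base grows: every retrieved snippet competes for the same prompt budget.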
You're not modifying the model. You're not fine-tuning it. You're simply adding extra knowledge to the prompt, appending information that helps the router select the right answer based on your company. >> Now, that has a context window problem. You can't just stuff your entire company database into the prompt and the context window, because the model will forget. They come out with these models all the time, and I have a Harry Potter book saved as text on my computer. Whenever they say, "Oh, there's a million tokens in the context window," I take the Harry Potter book, copy and paste the whole thing in, put "the key is in the treasure box in Minnesota" somewhere in the middle of the book, and hit send. Then I say, "Hey, million-token machine that you're touting is now context-aware of everything: what state is the treasure chest in, and where's the key?" And it has no idea, because it forgets all that information you gave it in one prompt. It's like, "Cool, I stored this all, but I don't remember, or I don't have context of, where that key is, because there are millions of keys in Harry Potter," right? So that's a problem, and RAG has a limitation. The new way we do this is through routing and relevance. You can use old computer systems: what's great about large language models is that they're the new operating system for how people engineer solutions, but the engineering we did for cloud
architecture, for SQL databases, for organizing structured and unstructured data, is very powerful to use alongside AI. Because you need a deterministic system for a record of the source of truth, a reliable record of all the information in your company, you also need something that can search and query it with natural language. When you combine those and say, "Hey, can you query my entire database and create a report of the last three deals over three million dollars with an ICP of this type," it does that, because your natural language is now querying: the large language model converts it to code, the code converts it to SQL, and the SQL searches the database and finds the answers. And because we can see the code generated by the large language model, we can QA it. We can say, "Cool, this large language model did a good thing," or, "The query failed because the code was incorrect," right? This is where the two systems combine. Now you don't need RAG as much. You need to be able to store information in the correct place and encode it so that the summaries, the context, and the meta descriptions are recalled at the right time. >> Interesting. Yeah, I was about to ask where you think RAG is going to go, but that kind of answered my question. Maybe the next question would be about the context window claims being made. Actually, I'm curious: which Harry Potter book do you put in for this? This is general curiosity. Chamber of Secrets? It looks like that's what it said. >> I
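The natural-language-to-SQL flow just described can be sketched roughly as follows. The schema, the stand-in "LLM" lookup table, and the read-only guard are all invented for illustration; the point is that the generated SQL is visible and checkable before it runs, which is what makes the final answer deterministic.

```python
# Sketch: the LLM's only job is to emit SQL; we inspect and QA the SQL,
# then a plain database executes it deterministically.
import sqlite3

def fake_llm_to_sql(question: str) -> str:
    """Stand-in for a model call; maps a known question to SQL."""
    templates = {
        "how many invoices came in from chicago":
            "SELECT COUNT(*) FROM invoices WHERE city = 'Chicago'",
    }
    return templates[question.lower().rstrip("?")]

def answer(conn: sqlite3.Connection, question: str) -> int:
    sql = fake_llm_to_sql(question)
    # QA step: the generated code is visible, so we can reject bad queries.
    assert sql.lstrip().upper().startswith("SELECT"), "read-only queries only"
    return conn.execute(sql).fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (city TEXT)")
conn.executemany("INSERT INTO invoices VALUES (?)",
                 [("Chicago",), ("Boston",), ("Chicago",)])
print(answer(conn, "How many invoices came in from Chicago?"))  # 2
```

Because the database, not the model, produces the number, asking the same question twice gives the same answer, and the stored result can later serve as the grounding point Andrew describes for the invoice example.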
think Chamber of Secrets. It's funny, I've actually looked this up before for no reason. Chamber of Secrets, I think, is like 30,000 words. >> Yeah, it's pretty big. >> Or no, actually, Chamber is 85,000 words. Okay, that makes sense. So it's like 85,000 words, and if I'm not wrong, a token is like three to four words. Is that the claim? That's the claim. Okay. So that would mean it's not like you're providing too large an input, right? That's what I find interesting, because isn't Gemini claiming one and a half million now? >> They're up around there. Every time I see it on Twitter or LinkedIn, I just try it. I mean, I have Gemini, Claude, and Copilot on my screen. >> Yeah, I've got it all. >> I try it every once in a while, but they have never pulled it off. >> Do you think the claims will be met at some point in the next couple of years? >> I don't see the necessity. >> Okay. >> We get ten proposals a week at this point, people asking us to do projects for big clients, and I have yet to see a client problem that needs a giant context window. We have engineering approaches that are not complicated, not challenging, and not costly, and they work. Increasing the context window actually causes more problems for us, because how do we QA something that is a one-shot prompt? We like to break it down into chunks, into bite-sized pieces. For example, we have an agent whose only job is to check nulls and make sure there are no nulls in the graphs, so that once it's done its job, we can run the next process without worrying that the data sheet will return an error. Once you break it down into small enough bite-sized pieces, if one step fails, you can go into the code and say, "Oh, step 17 failed, we know which agent failed, let's go fix that edge case." >> Yeah, I think this is really good commentary on something I've dealt with at my own company: people not quite understanding how to properly approach problem-solving with agents. Could you explain a little more the importance of how and why you should break these things down
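That null-checking agent and the "step 17 failed" idea can be sketched as a tiny pipeline. The step names and checks here are invented for illustration; they stand in for whatever narrow jobs a real orchestrated system would run in sequence.

```python
# Sketch: each step has one narrow job and a deterministic check, so a
# failure names the exact step that broke instead of failing silently.
def check_nulls(rows):
    if any(v is None for row in rows for v in row.values()):
        raise ValueError("null values present")
    return rows

def check_totals(rows):
    if sum(r["amount"] for r in rows) <= 0:
        raise ValueError("non-positive total")
    return rows

PIPELINE = [("check_nulls", check_nulls), ("check_totals", check_totals)]

def run(rows):
    """Run each step in order; on failure, report which step failed."""
    for i, (name, step) in enumerate(PIPELINE, start=1):
        try:
            rows = step(rows)
        except ValueError as err:
            return f"step {i} ({name}) failed: {err}"
    return "all steps passed"

print(run([{"amount": 5}, {"amount": None}]))  # step 1 (check_nulls) failed: null values present
```

Contrast this with a one-shot prompt: when the one-shot output is wrong, there is no step number to point at, which is exactly the QA problem described above.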
into bite-sized pieces? Because I've had people try to just one-shot it. It's always, "Oh, I've got a one-shot prompt, I can make the agent better." What's the reason you still need these bite-sized, chunked sequences? >> Yeah. Agents are set to go on a mission to solve a specific problem, and when you give one a large enough, multi-step problem, at a certain point it will break down, and you lose the ability to understand how to fix the edge cases of that scenario. That's the majority of what's wrong with vibe coding right now. >> When you vibe code something, you're coding for one. So when you tell an agent to create a screen with a login button, a sign-in, and a password >> uh, it doesn't >> Yeah. So when it forgets how to do authentication on "forgot password" and you have to force it through that thought process, you're leading it down a path of how you believe it should be fixed, but you've stopped using the correct engineering processes, the training another engineer could pick up and say, "Oh yeah, that's a bug, let me solve it, because we see it all the time." When you use standard coding and you create a step ladder, this agent does this task, this agent does that task, with a router that can call on the specific agents (we call it an orchestration agent) to bring each one into action and say, "Go complete your mission." >> We can then look at the code and figure out when something fails and why it failed. So every edge case has a determination: should we send it to a human, should we solve for that edge case in the code itself, or should we create another agent to solve for it? >> When you step through this process, what we're truly doing is taking the large language model and breaking the problem down into a bunch of solvable, bite-sized problems, which lets us create repeatable results instead of consistent errors or hallucinations. >> So, could you speak to this? I'm curious, because you obviously did creative work with Charles, with Chuck AI. Is creative work a similar scenario? For me, I'm working on script writing, social writing, all these
improvements to what I do. Obviously, with Jotform, I'm working with them as the co-host of this podcast, but we produce content for AI and SaaS companies, so I'm trying to improve our script-writing process. I'm in this situation where, when I'm creating agents, I almost feel like I need to make sub-agents for scripts, because the script itself has these edge-case problems and I want to just fix each one with a follow-up prompt and an agent. Do you think that's a better approach than trying to one-shot it with a targeted script agent? >> It's a supply chain problem you're dealing with. If you look at a workstation, just like you're looking at your problem with the agents causing issues in your script writing, the initial concept of how you've gotten to that workstation is incorrect in the theory of constraints. You haven't isolated the problem into small enough pieces, so every sub-agent is just a band-aid for what that workstation has created. Workstations are how work gets passed from an idea, or procurement, all the way through to the widget being produced off the shop floor, to sales, and so on. >> If the salespeople can only sell ten widgets, you can only produce ten widgets in your entire shop, across every single workstation. So patching individual problems in isolation is the wrong approach. Here's the way we do content writing. We've worked with Contently, a massive Optimizely competitor that does content writing for some of the largest brands in the world, and it's very similar to Jotform: you're trying to figure out how to write a specific outline in the right voice, tone, and context, and to simplify the workflow so somebody can output a good piece of content. The true unlock is that the human is a very important piece of this puzzle, and you cannot replace the human. There are parts that humans do better than an agent, and you have to recognize that. Once you see that the human not only does a better job, but that their tasks and inputs are so valuable, you need to weave in and out of where the human needs to support the product itself. Let's just go through the process. If you're doing good writing, step one, step two, step three: the first step is writing an outline. The outline should be human-prompted and the idea human-created, because agents have a really tough time with creative concepts, with what is new, what has never been written before, what's novel. So the human should do that. Once the human generates that concept and outputs the brief for the outline, the outline can be created by an agent. That agent then supplies it back to the human, because you need fact-checking, you need SEO, you need all these other steps that are each done better by either a human or an agent. And so you go step by step by step, not subsystem to subsystem. You should be going horizontally, just like a supply chain. When you look at it horizontally, there are different departments, different workstations, doing different things, and some of those things are not trainable, not executable, or not repeatable, which means you shouldn't be using
an agent. You need to use a human. And so as that workflow goes from left to right to production, you want to make sure those workstations have the right checkpoints and the right systems in place to solve the problem. >> Yeah, absolutely. I hear what you're saying. I've been trying to make it more horizontal. What I've noticed throughout the whole process is that content idea concepts are often bad for product use cases, for SaaS products, so humans really help with generating and verifying them. And content outlines, at least the problem, the solution, and the step-by-step instructions, still mostly need people when it comes to tutorials. AI does okay with listicles, product feature comparisons, or your basic overviews of things, but with technical stuff, AI is actually really bad at describing it in a creative and fun way. So I hear you. We're definitely trying to make sure the process is laid out: there's a title agent, there's a research agent, there are all these different steps in there that are horizontal. So I hear you. And for all intents and purposes, it seems like you guys know exactly how to create not only what people would call good AI or good agents, but also how to structure it appropriately for success. My final question: do you have any other recommendations for anybody trying to build things themselves? And obviously, just to close
it out, plug whatever you want people to go check out, and where they can find you. >> Sure. I think a lot of what I said on the podcast rings true: hire people that know what they're doing. Vibe code to get to a certain spot, to where you can take an idea and send that prototype to someone who can then execute it. But don't ship scalable products built on AI-generated code. It'll break, it'll cause problems, it'll be a disaster in the long run, and it will cost you more money. We're at a time when it's really cool to come up with ideas and present them in ways that have never been presented before. And for a marketer to be able to do engineering work, or an engineer to be able to do marketing work, it's awesome that we can cross the department divide and help those departments and explain things better, as humans, to humans. But humans are going to be great at working with AI in the future, not AI replacing humans, and that's what we're hoping for. So, you can find me at NineTwoThree. I post every day on LinkedIn. I'm on Twitter, but I just repost there; I'm not as active. You can find me on LinkedIn. >> Awesome. Well, make sure to go check Andrew out at NineTwoThree. And their website, one more time for anybody who wants to go there, is ninetwothree.co. If you are an established brand or a funded startup, please make sure to check them out, especially if you're looking for a custom-built solution. And if you just want good AI content like he's talking about, check him out on LinkedIn. >> Thanks for having me, Demetri. >> Thanks, Andrew. >> Peace. >> Bye.