Challenges and Opportunities in Enterprise AI Answer Agents with Yana Tornoe (Question Base)
About This Episode
Yana shares insights from over a decade of building productivity tools, highlighting the challenges enterprises face in converting outdated communication and scattered documentation into verified, reliable answers through AI-powered knowledge systems.
We dive into how Question Base integrates with platforms like Slack, Google Drive, and Notion to centralize and enhance knowledge sharing while maintaining accuracy and compliance, especially crucial in sectors like education, medtech, and fintech.
Yana also discusses the evolving role of human oversight in AI workflows, emphasizing the need for AI agents to be managed just like any team member to ensure trustworthy and scalable results.
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⏰ TIMESTAMPS:
0:00 - Why AI Management Skills Matter
1:00 - Meet Yana Tornoe Of Question Base
2:25 - Building Question Base From Real Problems
6:50 - Integrating Slack, Notion And Company Tools
10:00 - Challenges With Enterprise AI Accuracy
14:00 - Industries Adopting AI Agents
22:00 - Impact Of LLM Improvement On Products
29:00 - Evolving UX In AI Knowledge Management
38:00 - Documenting Knowledge In An AI World
41:00 - AI’s Future Impact On The Job Market
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Sign up for free ➡️ https://link.jotform.com/p2yKOn9Z3b
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Follow us on:
Twitter ➡️ https://x.com/aiagentspodcast
Instagram ➡️ https://www.instagram.com/aiagentspodcast
TikTok ➡️ https://www.tiktok.com/@aiagentspodcast
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Transcript
Whoever doesn't adopt it, or is not willing to become that management level of the AI, is going to have a hard time in pretty much any industry, whether it's manufacturing, whether it's service industries, whether it's knowledge-based workers. I really think that this is such a transformative technology that it is going to impact each and every aspect of our lives. [music] Hi, my name is Demetri Bonichi and I'm a content creator, agency owner, and AI enthusiast. You're listening to the AI Agents Podcast, brought to you by Jotform and featuring our very own CEO and founder, Aytekin Tank. This is the show where artificial intelligence meets innovation, productivity, and the tools shaping the future of work. Enjoy the show. Hello and welcome back to another episode of the AI Agents Podcast. In this episode, we have
Yana Tornoe, the co-founder and COO of Question Base. How are you doing today, Yana? >> I'm great. Thanks so much for having me. >> Yes, thanks for coming on the show. I'm glad you're calling in from beautiful Miami. We were just chatting on the pre-call about how it's sunny as always. It's 40° today in Chicago, so I >> know I shouldn't be a jealous person, but sometimes I am. >> But outside of weather, let's talk a little bit about Question Base. Let's talk about you. Tell us a little bit about how you got into the world of AI and AI agents, and give us a little bit of background on building the brilliant bot that is Question Base. >> Thank you so much. Well, accidentally, absolutely accidentally. It was not intentional to get into AI.
But as the saying goes, all paths connect once you look back in time. My co-founder and I started building productivity software roughly 12 years ago, which makes me sound really old. With our first company we were also building tools in the same space. We had a product for task management and a product for organizing the work of teams, a more collaborative workspace with notes and communication, essentially trying to create a vision that now, a decade later, Slack is achieving with all their new releases around canvases and huddles on the side of chat, and now you have the agents right there, plugged in with access to a CRM and all these things. So it's really fun for me to see the evolution of the whole space after spending my whole career here. And the first years
of building Question Base were actually not dedicated to AI at all. It was dedicated to solving a very real problem that I was experiencing at a job I had between the two companies. >> Okay, nice. And I think it's fair to ask: when did you start seeing the indicators that you would need to get into AI, and what was that like? Because I feel like every company that has now become an AI company without intending to be one has their own intriguing markers they saw in the market. >> So with the first version of our company, it was actually before ChatGPT released, in January 2023, we were
already working for a year and a half with company data, organizing it in a very manual way. And we were hearing from more and more builders in the space, not from customers, but builders and founders close to us in our network, that they were using this new technology available through OpenAI, and you could just skip some of the steps in the manual workflow by using the AI for them. One of the first use cases was of course AI-powered search, and we spent a good portion of a year explaining to our customers that now they don't need to search keyword by keyword; they can just express it in whatever phrase comes to mind, and the AI will match it to results from their data and surface answers.
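The shift she describes, from exact keyword lookup to matching a question phrased however it comes to mind, can be sketched in miniature. Real systems use learned embeddings from a model; the bag-of-words vectors and document strings below are purely illustrative stand-ins:

```python
from collections import Counter
import math

def vec(text):
    # Toy stand-in for an embedding: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    # (Counter returns 0 for missing words, so this is safe.)
    dot = sum(n * b[w] for w, n in a.items())
    na = math.sqrt(sum(n * n for n in a.values()))
    nb = math.sqrt(sum(n * n for n in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "how to request a refund for a customer",
    "vacation policy for new employees",
]

def search(query, docs):
    # Rank by similarity instead of requiring exact keyword matches.
    return max(docs, key=lambda d: cosine(vec(query), vec(d)))

print(search("can I refund this customer", docs))
# → how to request a refund for a customer
```

Even this crude version surfaces the right document for a query that shares only two words with it, which is the behavior users had to be taught to expect.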
So just for context, I forgot to tell the listeners what Question Base is. Question Base organizes company data with the use of AI to respond to employees' questions. Those can be product questions, operational questions, or sometimes even very practical questions from the sales team, from somebody early in their career trying to leverage the practical experience of colleagues who are more advanced. So it is a very good use case for artificial intelligence, which of course we didn't know when we started. We just knew that communication is a great source for transferring this tacit knowledge in an organization, and we were wondering why even big organizations or corporations have such a huge gap between what is stuck in people's heads and what is out there as available documentation. And
we saw that problem by being employees in these big organizations ourselves, being on the receiving end of that documentation, having the hard job of finding that information. And we were wondering: why was everybody asking in the chat? Why is nobody actually using the knowledge that is out there? Why is that knowledge out of date? And this was really where we started our endeavor in that space. It was like, hey, how can we effectively sift through communication data to extract knowledge and connect that knowledge back to people? And of course, that became an incredible use case for AI, way better than the manual workflow of a person who needs to remember to save every question they were asked or every answer they give. >> Yeah. Hey, you know, it's such a natural tool, I guess. You know, like
there are so many different things that we communicate now over Slack, which is mainly the tool I use on a day-to-day basis. I know other teams use other products, whether it be Teams. I feel like it's only Teams, honestly. >> These are the two big ones, right? >> Yeah, but you know, Slack is the main one that really does a good job of connecting with other tools, other capabilities. Tell me a little bit about how you as a tool obviously can reference the rest of Slack, right, but also integrate with other tools. I see on the site you have the ability to integrate with tools like Google Drive and
Notion and Confluence, and these are a lot of different tools that people use consistently. So how does your product factor in, when you're asking a question, what to look at based on how the question is asked? How does it work with more than one tool at once? >> Great question. And it's really something that is being developed right now: what are these guardrails you can create for AI agents to perform better? Because we've been in the wild wild west for the past couple of years with AI, everybody imagining that they can just plug in any set of data and magically AI is going to return great answers. I think the fact that the primary experience, the first experience really, that people had with that technology was on their
personal use case, leveraging the internet for their personal curiosity, created the expectation: why wouldn't you be able to do the same for your work information? And it became this huge gap between expected UX and what is actually achievable, due to a number of problems, but primarily due to the poor quality of enterprise data. Because the large language models available to the general user are of course trained on millions of data points for the same thing, so fact-checking can occur due to just the vast amount of validation you can run on these data sets. Whereas in an enterprise, you really hit a limitation: with 12 duplicates of a document, version 2.3.7 of a document, how do you
compare which one was the latest or which one was more accurate? Does it take into account the timestamp? Does it take into account who last edited it, and their role in the organization or expertise in that subject? So really, AI didn't have any of these built-in tools at that moment to make that judgment. So what it really did was make very believable answers on enterprise data that misled the employees into taking actions, maybe passing them on to their customers, or maybe initiating an internal workflow. So when it comes to this enterprise data, we are starting to achieve a certain level of maturity, and this is what we specialize in with Question Base, because the initial versions that we released were really building on the momentum of the technology that was out there: basically connecting your Slack
history and your other tools of information, and just believing, as oblivious as anybody else out there, about how accurate the answers would be. And when you crunch 10 years of company history of conversations in Slack that nobody ever kept up to date, because of course the way chat is built it's a chronological conversation, there is no going back in time to maintain accuracy with new details. So out of 10 years of history it will suddenly generate maybe 4,000 FAQs that are applicable to the company. They contain knowledge, but now the problem is that that knowledge is of very poor quality, and the accuracy and trustworthiness of that knowledge is very low. So even if you utilize AI to pass it on to people, it becomes very speedy instant answers to something people shouldn't have
gotten as an answer in the first place. So in the past years, and I think even today, we are the ones solely focused on building on top of conversational data, building all these guardrails and systems that are going to increase the accuracy of information, and putting in place a verification model. Putting in place guardrails per use case, per channel of communication: selecting specific data, so that this document responds in that channel and nothing else gets leaked into that channel, or these people from the organization are responsible for verifying any information before it reaches the end user. So all these small tools that we have built actually make sure that we utilize AI to pass on an instant answer, but an answer that is verified and accurate. It's been
quite a big challenge. And to your question, how are we integrating all this data? Well, it's hard. You need to actually fail first before you can build a solution that is robust enough for an enterprise. So maybe initially you start with integrating a full folder of a Google Drive, and the company thinks, oh, all our SOPs are there, because, let's be honest, people are also oblivious as to what is in these 10-year-old folders. People always overestimate the quality of their data. So they're like, oh, let's just connect it and we're all set. But then you look at the automation rate. You look at the actual questions that are being asked within a channel, and you see a huge mismatch between what the agent has learned from that
documentation and what people are actually asking about, and then the automation rate is down to maybe 10-20% in a channel, and people are like, oh, why is it not answering more? Because you don't have the documentation to cover the practical questions that people actually need answers to. >> Yeah. No, that's very important. And just out of curiosity, I think it's another fair follow-up question: what kind of industries are you seeing struggle with this the most? And which ones have you been able to help get closer to the ideal, to the expectations of what people would like? And we'll talk a little bit more about expectations versus reality and where you think it's going to go for your product and other products
in the future. But just to get us started, what companies do you feel are really trying to adopt products like yours? >> The industries that right now are heavily dependent on manual work and on deploying humans to the problem. And these will be industries where, we now work only with the internal part of the problem, not the customer-facing questions, but internal unblocking, and this is HR, this is product managers, this is people in operations or training and enablement who, instead of producing high-quality information and being that subject matter expert, are stuck repeating themselves and staying on top of the busy work. So for them, the case of connecting people to information automatically, and having that team player that is an assistant, an AI assistant or agent as we call
them, is of really high value. The industries are typically industries with high compliance, where, let's say the market conditions right now are being tough on some industries, they're being pushed to perform much more efficiently, to achieve better return on investment, yet they are in a situation where they just cannot benefit from generative AI because it's highly risky for them. So these are companies in education, in medtech, in fintech: companies where you need the AI-assisted work in order to speed up processes, but you need the answers to be humanly verified. So there is definitely awareness in this highly regulated market, because there is a general push for automation from the organization and bigger stakeholders in companies of this size, yet they are
in high need of guardrailing the performance of these agents. >> Yeah, okay, that makes sense. And I'm not quite sure what your product does in this regard, but we've heard a lot about it recently, especially with integrations and such. Has your product, on the back end, done any sort of integration with the MCP servers that are coming out for the respective products you're integrating with? I'm curious because that's been a recent trend that people have started to implement. >> A recent trend indeed. Just a few weeks ago Slack themselves introduced their MCPs. We've been a partner of Slack for the past four years, leveraging a certain level of the history of a team and being able to
work with that channel data in order to organize it and derive the FAQs from that channel. And the MCP will really allow us to search it more in real time, but it's not a priority of ours, because of the way the system is built: once you install the agent into a channel, you access these 90 days of FAQs. Really, if a question wasn't asked repeatedly within these 90 days, it's likely not going to come up, or next time it comes up, you wouldn't necessarily be annoyed to answer it from scratch. So we do already organize the information that comes from these 90 days, and I think that if we connect to the MCP right now, we're going to have a hard time controlling and guardrailing the data compared to the current model that
we deploy, which really is our competitive advantage, because all of our generated FAQs from this first filter by the AI are then humanly verified by experts on the team. And if you are instead doing real-time search in the history, then you're kind of skipping that verification layer, and really the tools that are out there right now, the LLMs, don't give you that verification tooling. It's also unclear for the industry how we are going to build a UX for such an experience on vast sets of data. >> Yeah. No, I think that's a good point. And is it difficult, in what you're doing, with a lot of FAQs and answers to questions, and how does your product move around this, when answers to
questions might change? If, say, somebody adjusts a process, or maybe even at a project level, if someone says it needs to go one way and then it changes, how does the product account for that kind of adjustment that naturally happens? >> It should be a static answer maybe, but it changes. Yeah. >> Exactly. >> So in that case, we have tools in place for duplicate checking. So if there is a new answer that contradicts what was already saved in the verified answers, you're issued a warning that makes you aware that you're creating a duplicate, and maybe it suggests that you edit the original answer. So in that way it becomes a very smart way of doing something that is historically really broken
with documentation, which is: how do you keep it up to date? Especially in fast-growing organizations, or companies with a lot of new data getting integrated into them, it's a nightmare. How do you keep that up to date? How do you even keep the overview of what's out there and integrate the new knowledge into it? And AI is incredible at that. It's incredible at detecting similar information and bringing it up, and then it's a UX problem: who do you bring it to the attention of? How do you integrate the nuanced answer into the old answer? Do you just erase it? Do you rewrite it and improve it? All these tools are something that we use the human in the loop for, and we really use the AI to be an assistant and in no case
a replacement of the experts on the team, but really data enhancement that makes knowledge experts in big enterprises perform to the best of their ability and really work strategically. >> Yeah. No, that's interesting. And it's kind of hard to know this from an outside standpoint. You made a comment in there about how the models are getting really good at understanding similar context and how they adjust there. So how have you noticed, in the last calendar year even, that they've managed to be more and more accurate at finding those similarities and making the adjustments? Because I've seen a graph, it's kind of crazy, that showcases the quote-unquote IQ of the models over the last year, and reasoning only really came out, I'm trying to remember
when o3 was released, it was probably in like April, so it's only been seven months; reasoning has only been out for a short while. And obviously it could be at the smaller model level. How have you noticed, and how much more do you think it'll improve your product, the models improving, right, the reasoning and the >> Significantly, yeah. >> This is a really good topic that you're bringing up, because it is a significant change, and honestly, even now, a change from one version of a model to the next can make drastic differences, and not always for the best. Of course, if we look at the overall performance, AI has become much better, just as the out-of-the-box solution without the enterprise data training on it, at judging and reasoning what would be the right answer. So even
if you feed it your Drive folders and your Notion notes and your Slack history of the channels, it starts taking much more of the main data into account. It starts having better judgment and understanding of the context of the question, to match it to a better answer. So by all means, it's a night-and-day difference just in this past year. The prompting you need to do on the out-of-the-box solution is much less, because it is that much better. We are taking a very different stand from our competitors, who are enterprise AI search, essentially tapping on top of all your data sources to generate an answer. For them it's an even bigger game, because before it was creating some mumbo-jumbo answer; now it's creating a very believable answer, and as the accuracy
of the data improves, then with the improvements of the LLMs you're actually getting closer to the right answer. But what we figured out within our industry, working with really sensitive data for enterprises, is that you cannot just give a believable, good answer; there is only one right answer. And that's the vast difference from working with consumer LLMs in ChatGPT. There are many ways of, hey, I really want to lose these five pounds, how do I go about it? There are billions of ways to go about it. But in most organizations, when you ask a particular question, am I allowed to issue this refund for that tier of customer, there is just one right answer, and that's it. And connecting people to that one right answer still requires that layer of verification. And for us
the models make a difference in a very easy-to-experience way, because we do still generate the AI-generated answer across your tools. These are not answers we can verify easily or pass on through experts before we pass them on to the end user. And in this case, from one model to another, it may give an answer that is very vague. For example, it will give a statement that vaguely answers around the topic, but not deterministically. And then you see that, okay, some of the system prompts were changed in that model, which led it to be less confident in its way of performing, and then you have to prompt it in your own way to make it stick to the instructions. For example, in our case, for
some of the high-compliance customers, if they already have documentation, they are required to have every word cited exactly as it is. So then we have our AI on top of it, and it cannot summarize the answer that's in the document, right? That's just not an option, because there are so many nuances to the language that it might get wrong, that somebody might quote to a customer or a partner, and the company will be liable. So then we need to prompt our model to stick completely to quoting word by word what is in the document. So there are changes like that that we vividly experience even to date. Overall the models are way better, and I believe that, through your conversations with competitors who do purely enterprise search across data, they are definitely experiencing vast changes.
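The word-for-word requirement she describes can also be enforced after generation, by rejecting any answer that is not a verbatim excerpt of the source document. This is an illustrative sketch of that idea, not Question Base's actual implementation; the document text is made up:

```python
import re

def normalize(s):
    # Collapse whitespace so line breaks and spacing don't break the match.
    return re.sub(r"\s+", " ", s.strip().lower())

def is_verbatim(answer, source):
    # Accept only answers that appear word for word in the source document.
    return normalize(answer) in normalize(source)

doc = "Refunds for Tier 1 customers require manager approval within 14 days."

print(is_verbatim("require manager approval   within 14 days", doc))  # True
print(is_verbatim("refunds need approval in two weeks", doc))         # False
```

A check like this catches the failure mode she calls out: a paraphrase that reads well but subtly changes the language a customer or partner might be quoted.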
And what do you think, as a final, I guess ultimate, look at the product, what would the user experience be? Like you said, there's a difference between before and now; you see a big improvement based on the models. What would be the main things the models could still help improve, so you can get closer to the absolute perfect outcome, I guess, in the next, let's say >> Yes. >> Couple of years or so? Yeah. >> If you just think about where we were a year ago, displaying the reasoning of the agents, that's a huge visibility thing: displaying for each and every question where the agent searched, which documentation, maybe even starting to put check marks:
did it comply with its guardrails? I have guardrails for that channel to only look into these documents; did it comply with them? So give the agents these various tools to double-check themselves, to essentially be managers of themselves, to make sure they're passing on accurate information. A little bit like when you have the research agents, and they research, and at some moment they issue a kind of summary, but then they're like, oh wait a second, I think I missed this source, let me go back and integrate it. So kind of having that loop of them verifying their process of going about answering. The first step is of course displaying that reasoning. The second step will be adding humans in the loop in a very natural way, with the user experience nailed, where it's
going to shift the role of experts on the team, essentially, from being the ones that provide assistance to being the ones that provide knowledge to the agents that provide assistance. And we can see that shift in the roles very vividly, because once a company comes to us and, let's say, they have the budget, they have the problem, but they don't have a clear person to take charge of the agent, and they are imagining plug and play, oh, I just plug in my sources, yeah, we have this documentation in Drive and it's not a problem, let's just integrate it, let's do the ingestion cost and whatever, and it's going to answer employees, then you know that that pilot is not going to work. They're not going to get the success they want, because
for AI, as of now, to work at enterprise level, you need to have that feedback loop where you monitor, as I was telling you: okay, it escalated these questions to a human because it didn't know the answer; why didn't it know it? Now go and inspect: do we not have that knowledge in the organization, or does the agent not have access to it? Or, okay, third category, this is completely new knowledge; what do we do with it? Is it worthy of being integrated into the knowledge base, or should we just dismiss it? So there are so many of these small components where humans actually need to engage with the performance of the agent to really empower it to take over the bigger task of responding to these more complex organizational questions. >> Yeah. And what kind of level
of comfort do you feel companies are having with, because the interesting thing is, I feel like, especially at an enterprise level, they want to make sure of the verifiability and the accuracy of answers, and the human in the loop is really important for companies right now. It feels like a buzzword that is being used commonly. What level of comfort do you feel people have, trust-wise, with the answers they're getting? And how would you say, just to ask more about your own company, what you think makes yours different, whether it's in regards to allowing for that human in the loop, or having better reliability than what you've seen from other competitors? >> Thank you. These are big questions for a topic
that is evolving as we are building, as more builders are also, through your invitation today on the podcast, sharing their knowledge. I think everybody is maturing. We are at the forefront of something that wasn't available to any of us before, so we are all trying to figure it out and chip in ideas with each other. But there is definitely a mismatch right now in maturity between what builders know as the best process, what it would take to get a company to a successful deployment of an agent, let's say just an answer agent like our use case, and what the company expects. Why? Because first of all, they already have the hires. The hires are doing the job, whether that's responding manually, or they are documenting things manually, or they are not
even doing these things. Maybe they have a completely different job and this is like their second, third, fourth priority on top of their work responsibilities. And one of the examples is documentation. Well, even as of right now, hiring for a role of knowledge manager or product manager requires documentation that is built for humans. It still rests on the old-school assumption of a human reading it. >> And I push back. I've had recent meetings with some competitors of ours, just sharing ideas, sharing knowledge very openly. As I said, in that industry we're all trying to figure out how things should work, and there is definitely not just one way. I'm pitching here on the podcast my way, because this is what we've experienced and gotten to a
solution with, but there are many other ways. And I challenge, for example, the solutions where there is a knowledge base connected to an answer agent, and you're selling that package to an enterprise and saying, hey, here is how you're going to write. And by now I'm like, why write it? Who writes that documentation? Who should write it? Nobody did it before, when this was the only way to document things. And why even write it? Who's going to read it? Because on the reading side, on the consumption-of-data side, we are much more advanced in our agentic capabilities. So connecting people to information through a simple answer is much more evolved than the writing capability. So then it becomes, oh, even if you write it, it's not like I'll ever read
it. I'll just ask my assistant for the portion of the information that I need. So it becomes an interesting space to navigate, especially as the maturity of the builders and the clients is not at the same level. So you have to start by understanding the actual state of things in their organization. Maybe their expectations of what AI is going to do for them, and the automation rates they're going to achieve out of the box, are way up here, and you have to bring them down and then help them actually align the expectations: hey, you need to have somebody on your team who is going to have the role of managing that agent. If you don't have that and you don't hire for that role, you're going to have a hard time getting success out of this automation. And there
are all these things where essentially everything needs to be reworked: the way of writing, the way of accessing information, the hires and talents on the team. All these are very intertwined experiences in an organization that are right now getting drastically and radically changed through AI, and you need to build these glue solutions, tape them together, based on the way people work right now. And in our experience it is a real challenge. It's a real challenge to talk to a knowledge manager who prides themselves on writing amazing articles in Confluence and confront them with the reality that nobody ever reads them. And you just have to tell them, okay, let's go look at the views. Let's go and open this article that you wrote a week ago. You promoted it in Slack. Let's open it and see how many people
have viewed it and see the remorse on their face when they very they realize nobody read it and [laughter] then talk about that because there is a certain um certain pride in creating good work. It's just really hard to confront people that that work no longer needs to take the same shape and form and for them to actually excel at this new reality, they need to work through this middle management layer of an agent. >> Yeah. No, that's a very good point. And I really appreciate you being uh candid there cuz I do think it's pretty true. Like there's a lot of documents people write, nobody reads them, right? Um I think the larger you get organizationally, the more people are trying to get organized. I think at a smaller scale, you know, SOPs maybe are more well like followed and and upkept and then
you get larger, and maybe they're even kept up, but them being followed, or even read, becomes much less likely. No, I think that's a very good point.
>> I really want to add something here and interrupt you, because this aspect of enterprise knowledge, who are we creating it for, is a lot on my mind right now. I was attending Dreamforce last week, through our partnership with Slack and Salesforce, and it was a very insightful conference, with a lot of amazing content on the topic of the agentic future. One of the things used as an example was how you can now use this feature in Slack to create really well-documented procedures right there, in the context of communication. It was demonstrated how you can chat with the assistant right there in Slack to create that asset. And what you and I are doing right now, we are generating an asset. I mean, it's just a conversation, but going from conversation to asset generation, in the age of AI, is a very small leap.
>> A very small one.
>> So, okay, that's conversational data, but I can take the transcription of it, I can turn it into a report, I can write a white paper on knowledge management in the enterprise, I can write use cases, I can do it per industry. It's all about my imagination, about the prompt, to generate the asset I want. So as we are exploding in the amount of conversational knowledge, whether it's through conversation in a written
form or in a spoken form, we are able to create these assets really easily. Yet again, who's going to read them? Even in the demo of an agent that, based on a few different conversations with the person, generated this beautiful canvas with details on a deal, recent metrics, taking the transcription into account to track the performance of the deal and how it moves, and so on. Again, that's a very beautiful asset, definitely very useful, but for whom? For whom are you creating that asset? Do you think a human will go and read it? How are we going to consume that content? And really, is a document in the middle a necessary step? Because if we go from conversation to an asset, and from the asset back to a conversation, which is how I'm going to consume it, why create the asset? Why isn't it conversation to conversation, with some guardrails in between that can create this verification layer and so on? So that's a lot of what I'm pondering, and I really don't have the right answer. I'm just seeing how the industry is moving. It's trying to take into account the fact that people have documented their whole businesses. In consultancy, KPMG and a lot of the big firms, their whole business is purely asset generation. That's what they're selling: the consultancy that leads to an asset. How are they going to work in the future? How are they going to perform in an agentic future where the asset is maybe an unnecessary step in the workflow?
>> Yeah. And kind of to speak to the industry
in general, one of the more top-of-mind questions people have is probably about what we think is going to be impacted. That's more conjecture than anything, but obviously everyone wants to know. You just mentioned how some people are going to have to get comfortable with AI agents becoming the middle management to what they're doing. What would you say, in general, are your thoughts on the impact the advances in AI, and the adoption of AI, are going to have on the job market?
>> You know, I'm a technology optimist and a humanity pessimist these days, because I see the vast gap it leaves between what people are skilled for right now and what the technology can do for us. As we are all seeing in the recent reports, the problem is no longer the technology; we have pretty much surpassed every barrier at the technological level. What holds us back from utilizing it and materializing it into ROI, whether at a company or a country level, is the adoption, and that gap between how humans understand and deploy the technology to benefit them. And unfortunately, I do think that a lot of the current jobs in big enterprises are just not going to be there. I don't actually believe there need to be such big enterprises. I think we're going to see a dissolution of organizations, and we're going to see smaller, much more agile companies that are very knowledge-specific, expert-specific, and that excel at managing these agents. And whoever doesn't adapt, or is not willing to become that management level of the AI, is going
to have a hard time, in pretty much any industry, whether it's manufacturing, whether it's service industries, whether it's knowledge-based workers. I really think this is such a transformative technology that it is going to impact each and every aspect of our lives, and it's going to happen really, really quickly. It can't be compared to anything we've experienced before. The impact will be so radical that within a few years we're going to have drastic change in society, just due to the nature of that technology getting loose in it.
>> Yeah. No, I mean, I think that's a fair comment, and honestly, most comments people make about this I'm open to hearing, because I just want to hear what the industry is saying. I don't think any of us are going to be able to predict it too well. I'm an optimist too. I think it's going to create a lot of agile teams. I think there are going to be a lot more businesses, probably a lot fewer huge businesses, and a lot more small businesses and opportunities there. So I am an optimist about it. And, being the optimist, to close the show out, I am curious, on a personal and a professional level, Question Base aside: what are your favorite AI tools that you're using right now, and what do you use them for? Maybe one of each.
>> I just love AI, you know. I recently started using a tool called Gamma AI, which produces these assets. Because, let's be honest, we are in the
transition period, where assets are still an evaluation of somebody's work or performance. So it's a really easy way for me: I can go to the coffee house and, on the way, just voice-record my ideas about the implementation for a client and how we should go about it, what complexities we're facing and how to navigate them. Then I just feed that transcription into Gamma AI and say, hey, style it up within our brand (we've already created the custom branding we need) and create a really quick-to-read ten-page PDF, and it will do it for me. It will be mostly on point; with minimal editing, I'd have the asset within this 20-30 minute walk. That is a tool I'm super excited about, because as a founder who is a jack of all trades, master of none, I've always had that barrier of expressing my ideas to the level that my co-founders could. One of my co-founders is a designer. He could always visualize the concept he had and bring everybody on board, get this buy-in, because it immediately becomes a very visual, very real concept. My other co-founder is technical, so he could always build a prototype: click here, click there, experience it, see my idea in action. And I was always the marketing-background founder who might have a good idea, but I'm sitting there drawing post-it notes on a whiteboard while people are half asleep. I can now do what my co-founders did. I can build a Lovable prototype in the span
of half an hour. I can visualize it as a landing page, a report, you name it, with design tools. I use Figma a lot as well. I love Figma. I felt it was my first big enablement, from being a visual thinker but not a designer to being able to visually express my ideas. But now, AI plus Figma is like, wow, that is unbelievable. So it's an incredible freedom for people. I wrote a post about it even two years ago; I think I called it "the next billion knowledge workers who are illiterate," because I believe good ideas are not limited by your level of education or access to resources. The most impactful ideas for changing people's lives are actually grounded in the people who experience the problems our societies have. And if you can now give them a voice-enabled assistant to just converse with, you tap into the ideas of these billions of people who maybe, as I said, are not even literate. Maybe they don't even have the means to communicate their problems and their ideas for solutions in any other way, but they can voice-record them, and the path from that to turning it into an asset, into something that is out there adding impact, is getting shorter and shorter. We're going to get to where every single person has that tool in their pocket to improve their local community and solve problems. And how incredible is that? Like, how incredible?
>> Very, very. I think it's incredible what opportunities we have. I mean, even yesterday... or not yesterday, sorry, two
days ago, an agentic browser came out, right? I don't know if you saw, ChatGPT's... I forget the name of it.
>> Atlas.
>> There we go.
>> Yes, exactly.
>> So, just to close things out, I'll say I appreciate the passion, I agree with you completely, those are some great things you just mentioned. And the last question I have for you is: where can people go to check you guys out?
>> questionbase.com. We do have AI agents hard at work maintaining the website, doing SEO articles, making improvements, so there's a lot of good content on a recurring basis that people can go and check. And for a more personal touch, they can always reach out to me with any questions on LinkedIn.
>> Awesome. Well, thank you, Yana. I really appreciate your time. Everyone, make sure to go to questionbase.com, that's questionbase.com, to check out the great tool we just covered in today's episode. And thank you so much for listening to the podcast. Make sure to leave a like, subscribe, and we'll see you in the next one. Thanks. Bye.