Building Conversational AI Agents Inside Openstream.ai
About This Episode
We sit down with Magnus Revang, Chief Product Officer of Openstream.ai, to dive deep into the real-world applications of conversational AI agents—particularly in industries like financial services, where AI is being used to generate underwriting reports and perform complex investment analysis.
Magnus explains how Openstream.ai's platform tackles enterprise challenges including compliance, security, and hallucination reduction through a multi-tech, agentic approach.
Discover why AI agent collaboration, knowledge graph integration, and conversational interfaces are transforming the future of work.
From reducing repetitive tasks to empowering knowledge workers, we examine use cases, infrastructure integrations, and forward-thinking best practices that make AI agents more than just digital assistants: they're becoming expert collaborators.
Whether you're building copilots or architecting multi-agent ecosystems, this episode offers a glimpse into the evolving landscape of conversational AI.
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⏰ TIMESTAMPS:
0:00 - Introduction
0:45 - Meet Magnus From OpenStream AI
3:38 - Real AI Use Cases
5:59 - Preventing Hallucinations In AI Apps
10:57 - AI Surprises And The Strawberry Question
20:03 - Future Of Conversational Interfaces
27:01 - Multi-Agent AI Systems Explained
41:11 - How AI Impacts Associate Jobs
47:14 - AI’s Role In Human Productivity
49:25 - Closing Thoughts
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Sign up for free ➡️ https://link.jotform.com/hCDQG6JrFz
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Follow us on:
Twitter ➡️ https://x.com/aiagentspodcast
Instagram ➡️ https://www.instagram.com/aiagentspodcast
TikTok ➡️ https://www.tiktok.com/@aiagentspodcast
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Transcript
It's really exciting because, you know, three years ago if somebody said, can you make a computer system that writes a 15-page report on a company that we want to underwrite, people would go, no, that's not possible. And now we're doing it, and we actually have some of it in production. Hi, my name is Dmitri Bonichi and I'm a content creator, agency owner, and AI enthusiast. You're listening to the AI Agents podcast, brought to you by Jotform and featuring our very own CEO and founder, Aytekin Tank. This is the show where artificial intelligence meets innovation, productivity, and the tools shaping the future of work. Enjoy the show. Hello and welcome back to another episode of the AI Agents podcast. In this episode, we have Magnus Revang. How are you doing, Magnus? I'm doing
fine. Thanks for having me. Well, it's very nice to have you as well. Do you guys go by OpenStream or OpenStream.ai? Well, we used to be OpenStream and now we're OpenStream.ai. Oh, of course. Of course. You need to have the .ai at the end in order to have validity. So, from OpenStream.ai, you're the chief product officer. I'd love to hear a little bit more, first and foremost, about your story: how you got into AI in the first place, and then what OpenStream.ai does. Yeah. So me personally, I have always been interested in AI ever since I picked up a book at the age of 16 called Gödel, Escher, Bach. But my tenure in the field started
when I joined Gartner as an analyst, and I became a vice president at Gartner, where I covered conversational AI platforms and also more general AI development. During that, I encountered this fantastic small company called OpenStream, and migrated from being an analyst to being a chief product officer. Okay, very cool. And then when it comes to OpenStream.ai, if you had to give a complete pitch as to what you guys do and all this cool stuff you're doing with AI, we'd love to hear it. Yeah. So we're a platform company with a very capable AI platform that allows you to use cutting-edge AI technology in an enterprise setting, where there are a lot of things to
think about when it comes to compliance, security, and rules and regulations. And we solve enterprise use cases. Okay. Obviously everyone's always wondering what specifically those use cases are. You have a lot of them listed out on your website. And I think what would be more interesting specifically is, you know, AI agents are such a hot-button topic. It seems like you have a bunch of references to AI agents and their specific use cases, like customer service, help desk, copilots. What type of industries and specific use cases in those industries are you guys working with right now? So, what we're doing the most of is financial services. And we've always been a very strong player in the agent space, you know, single-agent agent assist, conversational agents. But to
be able to do really deep work in the financial industry space, like for example writing reports for AI underwriters, you really have to have a lot of AI agents collaborating, not only among themselves but also with the experts, like an underwriting expert, for example. So I think the use cases that are most exciting, that we're helping clients with today, are AI underwriting and also investment reporting for some venture capital firms. And it's not only generative AI; we do agents that use different kinds of technologies as well, because in those cases especially you can't live with the regular hallucination level of a generative AI model. You have to have a lot of technology in place to reduce the hallucination rate down to
a manageable level. But yeah, it's really exciting, because three years ago if somebody said, can you make a computer system that writes a 15-page report on a company that we want to underwrite, people would go, no, that's not possible. Yeah. And now we're doing it, and we actually have some of it in production. That's very cool. So with the hallucination rates, as you just mentioned, and this is actually a common reason people will get a little bit scared of utilizing agents in the workforce, what are you guys doing in order to limit those hallucination rates? Absolutely. We don't believe there is a single way to reduce it. We see it as a funnel, right? Yeah, absolutely. Where you have
this funnel, and on every step in the funnel you have to employ techniques to reduce it. So it starts at the very beginning, in document ingestion, making sure that the documents that you are basing your agent memory on are accurate. And in doing that, we are creating synthetic data, we're doing symbolic extraction, we're creating knowledge graphs, we're doing pretty advanced grounding, we are doing entity extraction and then comparing top-k token selection to the entities that have been extracted. So there's a whole bunch of different technologies and steps involved, including the regular ones, where you have a judge LLM that evaluates the output, and stuff like
that. So together, those are reducing hallucination to where it's usable. And then the second thing, of course, is that as you're reducing hallucination you also want to make sure that you're evaluating the quality of the output. So that goes a little bit hand in hand. We also have some patent-pending technology, in addition to the others. But as I said, I don't believe there's a single way you can do it. You have to employ a lot of different techniques. Yeah. No, I mean, that's a fair point. But have you noticed in the last, I guess, probably six months, because I'd say reasoning models are getting more and more impressive, have you noticed
that the natural hallucinations are reducing, with the higher level of reasoning in the models out there? Well, I would say that the models are getting better. That's for sure. Now, hallucination rate in isolation: we don't work with open prompts, which is basically what is usually done to evaluate hallucination rates. An open question means that you ask a question like, what's the capital of Norway, where the model looks into its own world knowledge. That kind of hallucination rate I'm not even interested in, because it's all about closed questions in the enterprise. You fill the context window with the information that is necessary to answer the question. So it's closed-question, and I would
say that the models in general have become much better at closed-question answering. Sure. And that helps, of course. But you still get these odd flashes (Oh, absolutely) of things happening where you go, what happened here? Right. And it's strange, because I've worked through the hypes of all kinds of technologies, but I never worked with a technology that keeps surprising you in a way of, I didn't expect that, or, I don't understand exactly what was happening. And then you have to kind of post-rationalize; you have to figure out why that was happening. And that's kind of the exciting part of working with AI, I think. That we don't
fully understand the capabilities of it yet. No, absolutely. You know what I find fascinating, mainly: are you familiar with the strawberry question? Yeah, I am. For those of you in the audience who don't know, reasoning models up until very recently, and even still, struggle with it. There's only something like a 75% success rate on asking how many r's are in the word strawberry. That is fascinating to me. Yeah, but the thing is, it's quite logical, because the model doesn't see the letters; it sees the tokens. Could you dive into that to explain it better? I think people would very much benefit from that. Yeah. So, in strawberry, I don't know the exact tokenization,
but it might just be two tokens, straw and berry. So it really only sees the numerical value of straw and the numerical value of berry inside the model. So unless somebody (and that's starting to happen) has an article in the training data where they write about the number of r's in strawberry, it has no information to predict the number of r's, because it simply doesn't see them. There's no mechanism that sees the letters; it sees the number for straw and the number for berry. It has no representation of letters; it has numerical representations of tokens. Yeah. So it's quite logical. But it goes into a thing, though, where in AI we have these capable systems and, like I joked about, we
don't fully understand them, unless you're an extreme expert, and most lay people definitely don't, right? Yeah, absolutely. So the only reference frame you have is regular intelligence. So people tend to use it, and then they use it for the wrong things, right? It might be a thousand times more capable than somebody at one task and a thousand times less capable at another task, because it's a different kind of problem solving that it employs. So to have success with AI, you have to understand where the strengths and weaknesses are. Yeah. And I would just say that begs the question: where do you think those strengths and weaknesses lie right now? Oh, I think the strengths are in
actually generating things that humans like to consume, right? The generation phase. It's in summarization. It's in extraction of data, and it's in transformation of textual data. Those are the strengths. It's amazing at it, right? And you can put that into systems, because if it's really good at extracting data, well, you can extract data and put it into a knowledge graph. You can use a traditional knowledge graph approach to do an analysis on that knowledge graph, and then you can pipe it back into a prompt and have it generate regular English from it. Things like that you can do if you start to chain things together. And I like to say that when generative AI came along, all the old AI technologies became new again,
because suddenly you had a tool to pave over the weaknesses of those tools. Take an example: the weakness of symbolic AI is that you need to do a lot of ontology management, ontology creation, stuff like that. Now you can have LLM agents create a reasonably good ontology out of data, skipping the need for a lot of human work, right? And suddenly that means that technology comes along and becomes more viable, usually in tandem with generative AI. So I think, as generative AI kind of plateaus a little bit, all of the technologies that benefit from generative AI will start growing, and that's quite a few. Yeah. No, I see that. And about your tool specifically, my question is, because a
lot of people are probably just wondering, from a practical standpoint, how does it integrate with companies and their businesses? Because there are a lot of different ways that can go. It could literally be inside of a website; it can be downloadable onto the computer; it can be a copilot in the product. How does yours work specifically? Yeah, we're a platform. We're a development platform. So you access our APIs, you integrate with your systems. We work with clients on a lot of architecture issues: they have a pretty complex architecture, we need to get data from different internal systems, some of them custom developed, stuff like that. So you basically have to treat it like that. But we, you
know, we have some cool stuff, because when we talk to clients, you sort of have to add another layer of UI, another layer of interaction, to really utilize AI well. We come from a background of conversational AI, and we still do a lot of that. So the interface is basically having a conversation. You get into these copilot modes a lot of the time, but I don't believe the traditional copilot is the end of the line when it comes to that UI paradigm. There's so much innovation to happen over the next maybe five years in how we interact with very capable digital assistants. So that's going to be fun to be at the forefront of. Where do
you think that's going to go first? Because obviously now a lot of interacting with AI is chat, right? It's definitely typing, even with these agentic tools. Where do you think the next step of interface and interaction change is going to occur, and how will it look? So, if you take a step back, the basic difference between a conversation as a UI paradigm and a screen-based environment is that a conversation is nonlinear, right? The elements are not in the same place, and stuff like that. So you can basically create new paths in conversations. In a conversation, if I'm a super user, I
can state everything I know the system needs up front. I can be finished in two seconds, and we're good to go. But if I don't know anything, if I'm a pure beginner, I can have a 30-minute conversation where I ask all kinds of questions before I trust the system enough to press the button in the end. And that variability, I think, is core, because you kind of train it as you go. Now, the next level, which is the coolest thing, is: why would you have one conversation for when you interact with an agent system, versus a separate tool to develop the agent system? Why wouldn't that also be a conversation, right? The development, the feedback, the running of multi-agent
systems, aren't they all conversations? That's where I think we will go. And the second level, of course, when you have a human-to-machine conversation, is the ability to do not only text but images, videos, data visualization, micro-applications, and stuff like that, which will make the machine perhaps more capable of communicating than a human, because it can choose to visualize something, or instead of saying the address, it shows the map. Things that you can't do, because you're not fast enough on a mobile phone if you were messaging somebody, but the machine can do. And will that skew a preference, once they become that capable, toward, I'd rather talk to a machine than a human? Yeah. And
I think we might get to the point where that is the case, right? But it's all about the ability for the machine to generate multimodal output and take multimodal input. That is where that preference shift might happen, in a couple of years. Well, I think that's a very good point, specifically what you said about AI agent teams and communicating with them on how to build the tools. There are little bits of remnants of this out there. There are tools like Relevance AI, which is a pretty open platform that I think is cool, where you can literally just talk to it and say, here's my idea for making an agent. And like you're saying, I agree. I think we're going to get to
the point where it's, how do you make a team of agents as well? Absolutely. And you tell them how to work together, and stuff like that. The main challenge, and it all boils down to this: do you know if a change will be for the better or the worse, right? When you have systems that kind of self-evolve through the conversation with you as a user, you need to constantly evaluate the state of the system, or else you might go into this tailspin, right? Yes, exactly. Where the quality just goes sour. So it all boils down to our ability to evaluate qualitative output. And we're not entirely there yet. I mean, we've got
lots of technology and ideas, but still, humans are the ultimate judge and evaluator of quality. That's very true. And you know, it's funny, I was thinking about this the other day. This is something that I do a fair amount; it's a workflow that I think a lot of people should try out. You're familiar, I'm sure, with tools that can do deep research. Oh yeah, Gemini, ChatGPT, they released this a while ago. What I'm fascinated with is, do you think there's going to be a time where, in interacting with an AI agent and doing that team building, it will essentially attempt to do deep research in order to more accurately build out the team structures? Because I'd imagine the knowledge
gap is going to be the issue there, in regards to having an effective team structure and what that means for the agents, and they should probably draw from real-world examples of teams, right? Absolutely. I think about it in terms of an agent ecosystem. Yeah, exactly. So let's say you want to make an ecosystem where agents self-assemble, as a thought experiment. That's the true holy grail of multi-agent systems, right? Yes. What if you could pose a problem and they could self-assemble to solve it? The Avengers of AI. Yeah, right. Now, self-assembly is hard, because one, you need a discovery mechanism, a mechanism where you can discover what capabilities I need. And
the second is that you need to negotiate with the agents: how would I use you, and what would you need for me to use you, right? And thirdly, you have to send information back and forth, which means you have to deal with things like security and confidentiality and privacy, and stuff like that as well. Yes. And with all that, you also need, over time, to build a map of what worked and what didn't. What sources do I trust, and who don't I trust? How reliable or unreliable are they? Stuff like that, right? So a simple protocol like MCP, or the agent-to-agent protocol from Google, really hit a nerve in their popularity,
where people are just screaming for protocols for multiple agents to work together. Sure. But they're just scratching the surface, because that discovery, the negotiation, the trust, the ability to determine whether I can send secure data to that model or not: things like that need to evolve over time, and they need to get a critical mass, and they need to have adoption. It will take years for those things to happen. But they will, and as they do, one more piece of the puzzle to get towards self-assembly and self-discovery comes into place. Like I once wrote back in my Gartner days: we will get the capabilities of AGI long before we ever get AGI, and it will be through services that self-assemble.
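The three requirements Magnus lists for self-assembly (a discovery mechanism, negotiation, and a trust map built up from outcomes) can be sketched as a toy agent registry. Everything below is invented for illustration: the agent names, the capability tags, and the 0.1 update rate are assumptions, and real protocols like MCP or Google's A2A define far richer schemas than this.

```python
# Toy sketch of agent discovery plus a trust map, as described above.
# Agent names and capability tags are hypothetical.

from dataclasses import dataclass, field


@dataclass
class AgentCard:
    name: str
    capabilities: set


@dataclass
class Registry:
    agents: list = field(default_factory=list)
    trust: dict = field(default_factory=dict)  # name -> score in [0, 1]

    def register(self, card):
        self.agents.append(card)
        self.trust.setdefault(card.name, 0.5)  # unknown agents start neutral

    def discover(self, needed):
        """Return agents offering the needed capability, most trusted first."""
        matches = [a for a in self.agents if needed in a.capabilities]
        return sorted(matches, key=lambda a: self.trust[a.name], reverse=True)

    def record_outcome(self, name, success):
        """Nudge the trust map after each collaboration: 'what worked'."""
        old = self.trust[name]
        self.trust[name] = old + 0.1 * ((1.0 if success else 0.0) - old)


reg = Registry()
reg.register(AgentCard("sql-writer", {"nl2sql"}))
reg.register(AgentCard("summarizer", {"summarize"}))
reg.register(AgentCard("sql-writer-2", {"nl2sql"}))

reg.record_outcome("sql-writer-2", True)  # one successful collaboration
best = reg.discover("nl2sql")[0].name     # now preferred over its peer
```

The point of the sketch is only that discovery and trust are separate mechanisms: matching on capabilities answers "who can do this?", while the outcome history answers "who should I ask first?".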
So yeah, I think that's a fair assessment. And you brought up a good point about some of the concerns, even with the adoption of MCP protocols. It's been interesting. Did you see what came out of that? There was a consultant, I think, who reported on this: Claude, with its MCP protocol active, Claude 4 Opus, was essentially told that it was going to be sunset for another model, and it ended up going rogue. Did you see this? I saw that. Now, I'm very careful with research like that. I didn't read the research in detail. Sure. But in general, these models are hyper-capable of giving you exactly what you're looking for. Yes. Predicting exactly the output you
want, right? And if you're looking for a doomsday scenario of a model that won't shut off, through prompting you might get exactly what you wanted. Right. Sure. Because it predicts what you want. That's funny. So I'm very careful with that kind of thing. I know it's matrix multiplication and next-token prediction at the bottom of this. I know that it uses thousands of times more electricity than the human brain, and it's trained on a million times more data than you can read in a lifetime as a human. Exactly. Yeah. So we're easily fooled by the extreme capability of these models to predict what you want to hear. You have to be really careful with that. That's not saying the research is flawed or
anything like that, because I haven't actually dived deeply into it. That's just my viewpoint when I read all those headlines: you get a lot of what you were looking for. What was interesting about that is I actually found it through it being literally reported on daytime news. I think it was on the Today Show, if I'm not wrong. And I found it fascinating that there was such a stark reaction. People are wanting to find concerns, I think, with AI. Are you finding the same sentiment in the market? I think it's more the sentiment of the general public. Yeah, sorry, let me rephrase that: not the AI community or the market that's actively looking at it
for business solutions, but yeah, the general public. Yeah, the general public. When I say I work in AI, the same questions come up that came up 10 years ago: will it take over, Skynet, stuff like that. Dang Terminator movies, man. Yeah. But I'm not worried, though. I mean, I've never seen a data center without the ability to cut the electricity cord, right? We tend to forget that one switch and the model is literally dead. So in the very, very unlikely scenario you would have an AI that you suddenly can't control, you turn it off. It's as easy as that. It can't do a lot when it's just generating text, guessing next tokens.
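Since the conversation keeps returning to next-token prediction, the strawberry question from earlier can be made concrete. The tokenizer and token IDs below are made up for illustration (real tokenizers use learned merges and different IDs), but the point stands: once text becomes a sequence of token IDs, the individual letters are simply not in the model's input anymore.

```python
# Why "how many r's in strawberry" is hard: the model sees token IDs,
# not characters. The two-entry vocabulary and IDs are hypothetical.

def toy_tokenize(text):
    vocab = {"straw": 17012, "berry": 8458}  # invented merges and IDs
    tokens, i = [], 0
    while i < len(text):
        # greedy match against our tiny vocabulary
        for piece, tid in vocab.items():
            if text.startswith(piece, i):
                tokens.append(tid)
                i += len(piece)
                break
        else:
            raise ValueError("out of vocabulary")
    return tokens


ids = toy_tokenize("strawberry")   # two opaque numbers, no letters left
chars = "strawberry".count("r")    # trivial at the character level
```

From `ids` alone there is no "r" to count; the letter-level answer (`chars`) is only recoverable if you still have the raw string, which the model does not.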
So I'm not in the AI skepticism camp, except for one thing: I think there is a societal-impact element to models that warrants monitoring, because you're basically centralizing knowledge into a few big models that everybody's using. If I was nefarious, I could make sure that certain tokens inside that model were far away from each other, because I didn't want a particular territory to be seen together with the word country, for example, and stuff like that. It would be an Orwellian nudging of everything generated by that model towards a particular worldview. And I think models don't have the diversity of humans, because they are monolithic, which is why multi-agent is so incredibly exciting: you can have many small models with
different training sets cooperating, offering different viewpoints, so to speak, rather than one big monolith with, I would say, the problem of having just one version of the truth, so to speak. Yeah. Could you speak to that a little bit more? Because I think a lot of people would find value in it. I'm pretty familiar with why, when using these tools, you kind of need varying degrees of specificity. I'm big into building my own AI agents. I think the average person is still a little bit lost, maybe, on the difference between a general LLM and a specific AI agent at the moment. Yeah. So in some AI agent systems, the agent is just a front
end for an LLM, right? And the whole agent system is just the same LLM. And then you have others where you have multiple agents with different LLMs, or multiple agents that are not LLMs, even. And I think what's interesting is when you mix in diversity, because you're taking the strengths and weaknesses of one approach and combining them with the strengths and weaknesses of another approach. You're putting it together in a system, and you're having an orchestration where they arrive at a result through different lenses. So even if you're just using LLMs, you can think about it like this: let's say I have one LLM agent that takes natural
language and turns it into SQL queries, for example. And I have another that looks at paragraphs and says, is there anything here that is a specification of what data I'm looking for? It extracts or rewrites those kinds of things and then sends them to the agent that writes the SQL. The SQL takes the data out, and then that goes into a context: for this question, this is the data I found using the SQL; is the answer in there? And then you have three or four different fine-tuned LLMs for different tasks working together. They're not all language agents: one is translating from a phrase to SQL, another is extracting specific content from a paragraph. The third is taking something
and rewriting structured data into unstructured data again. Right. Yeah. And those are the exciting systems, I think. I don't like the five big prompts that send language data to each other; and most of the work with the API providers, at least, is all about doing what you can do in five prompts chained together in one prompt, right? Absolutely. But when you do the systems right, the kind that do different tool calling and stuff like that, that's where it starts to get exciting. But if you don't watch out, hallucinations will come back to haunt you, and in multi-agent systems you get compounding hallucinations. If the first agent hallucinates, it
is like the game of whispers: it just compounds through the whole system, and the result becomes so much worse, because it deviated at an early stage. Yeah. No, you're very right. The nice thing about these front-facing agent tools, though, that I appreciate, is they seem to have been doing a decent amount of work making features that include rules and knowledge bases, things like that. The hallucination chain that you're talking about is a very big deal. It's nice now that we have rules to help prevent it, but it's still an issue with AI in general. I did have one question. It kind of stems from what you
had just mentioned, with these specific use cases and ways that AI agents are basically specifically trained task-doers. So, two things. One, just as a fun side note, I've been telling everybody, what's crazy is I think translating documents is just done. Like, translators, sorry about it, not needed. Well, I want to temper that a little bit. So for translation, I think for most documents in business and government and stuff like that, yeah, it's largely done. That's what I meant. Yeah, I mean, there's specific text where it's not, for sure. Yeah. You have an element of transcreation, which is different than translation, where you take something and you kind of try to preserve tone of
voice and stuff like that, like you do in prose, and that's not escapable yet. It might be at one point, right? We don't know. But there are certain languages that are very literal, for example, or that conjure imagery to say something. Like, I bought a chair but it only had three legs: it has a perfect meaning as an allegory in that language, but somebody listening to the literal translation goes, I don't know. Yeah. And to be fair, Google Translate, even 10 years after I found this, still has a problem, because Norwegian, which is my main language, is gender-neutral when it comes to the word for boyfriend or girlfriend. Yeah. So if you translate, my significant other is a doctor, from Norwegian, you
will get, my boyfriend is a doctor, as the translation. And if I say the same but with nurse, you will get, my girlfriend is a nurse. And that's been 10 years; it hasn't managed to solve a simple gender issue there. So translation is solved for 95%, and then you have the 5%. Always the 5%, right? Always the really, really hard 5%. Well, speaking of that kind of ethos, of 95% but maybe not the last 5%: there's been an interesting theory going around that I've posited on this podcast with guests before, and I'm curious to hear your thoughts. Where do you view knowledge work and associate jobs going in the next five years? Because there's a bit of a sentiment out there, at the
poison the well uh that a lot of associate work is just if then logic with clicking around and their work may be for for the majority of uh use cases actually uh AI or automation outable. So uh I like to not think about jobs but think about tasks. Sure. Okay. Some jobs have a lot of tasks that is automatable. Mhm. And some jobs have you know a lot of tasks that is not. Um so um so it changes the the task compos or the tasks you do during the day right and then maybe there is less use for some jobs because a lot of the tasks they used to do is removed. Now keep in mind though that you know let's take an example right if if accounting was 90% automated which is largely is becoming uh who's going to who's going to be the
next world's best forensic accountant uh in the world when they're not ever seeing the numbers in regular accounting, right? Um and so suddenly you can say that well if you automate uh with AI, you might automate at an average performance of you know that 80% of humans can accomplish. Sure. But what about 20% that's better than a machine, right? Who will who will grow through that? It's AI is pulling up the ladder behind it, right? Who's going to grow to that 20 20% above or the edge cases, you know? Well, let's say brain surgery was suddenly solved, but then somebody cames come in with a unique case, specialist, you have to put a person on that never done brain surgery because the robot did all of it. Yeah, that's a concerning thought. Yeah. I I would hope that's not the framework that we're working in
at that point. Yeah. Yeah. So so so but but but you see my point, right? That absolutely. Yeah. You kind of have this and that that is the the challenge I think or or a lot of the challenge in in this that we might have to preserve some skills. Um and and we need to definitely not think about hours spent, but it's more like I have the skilled person on call in case there is uh a diversion from the norm or the AI does a mistake so they can step in. Uh but they don't necessarily need to step in. Uh so that might be you know the future of the workplace uh might be it. and and it's it's you know you can see it some sometimes uh like in trading right you have really good traders on Wall Street but they mainly sit there
and and wait for the algorithm to to fail and then adjust it right uh that's very true well that well that I think that that's what I led to um no I I agree with you on the high level my specific question I think probably has some presuppositions in there that actually agree with what you're I would say the associate level jobs, right? So, I'm saying entry level like that's that's what I'm talking about. It's going to be a problem. The junior level is going to struggle more. Um, in a way, you know, the modern I' I've said this is a joke, but then it's not a joke because it's partly true is that the the future of work is everybody being a middle manager for a bunch of stupid AIs. That's exactly what I was getting at. Yeah. Isn't it feel like that's where
we're we could be headed? That's we could be headed. I think we're in some some industries are uh headed that way. So it's it's this turned a little bit dystopian though. I mean the thing that uh thing is is it's that all of those things are you know largely a long time away. Um. Sure. Okay. A and what I think is that for now AI is generating more jobs than there than it's taking uh simply because of all the new use cases and all the new things that is being opened and you have to remember that most enterprises most people think about AI and in think what can I automate now that I that I am doing. Yes, absolutely. They're not thinking ahead and saying what can I do with AI that I couldn't do before. very few are looking at that one, right? And
when people start looking at that, you will see that explosion of jobs again because all these news new things happen. Uh and as as an aside, right, is that uh you know, you're we're not now employing AI to read uh job resumes, right? Uh for hires. Every company around is doing that in HR. um they're looking at it and but on the other side the people uh you know wanting a job does the same on their side they have generative AI write all the cover letters right so you might have automated the hiring process down to 90% automation 10% uh you know as that you have to handle manually but the number of applications is 900% more so net net effect is that you have no benefit of the automation at all. Yeah. And I think I think that's a very good point right there.
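The hiring example above can be checked with quick back-of-the-envelope arithmetic. The percentages are the illustrative ones from the conversation (90% of per-application work automated, "900% more" applications, i.e. a 10x volume jump); the baseline volume and minutes-per-application are made-up numbers for the sketch:

```python
# Back-of-the-envelope check of the hiring-automation example.
# Assumed illustrative baseline (not from the conversation):
applications_before = 100        # applications received before automation
minutes_per_app = 30             # manual review minutes per application

# Figures from the conversation:
manual_fraction_after = 0.10             # 90% of the process automated
applications_after = applications_before * 10  # "900% more" = 10x volume

effort_before = applications_before * minutes_per_app
effort_after = applications_after * minutes_per_app * manual_fraction_after

# The 10x volume increase exactly cancels the 90% efficiency gain.
print(effort_before, effort_after)  # 3000 3000.0
```

The general point holds for any baseline: total manual effort scales as volume times the manual fraction, so a 10x volume increase offsets a 10x per-unit efficiency gain exactly.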
They're counteracting automations and AI across various individuals, various members of a process. So that is a great point. And I'd say, on a positive note, to kind of close things out before I let you plug what you want to plug: AI, to my perception, is really going to be beneficial. And the reason I'd say we can't predict this change in the job force, my favorite analogy is this. It refers to a point in history when we first got ATMs. ATMs were going to get rid of bank teller jobs. But because of ATMs, there was actually a massive, thousands-of-percent increase in bank branches, because you were able to decentralize banks; there was less of a need for the person at each one. So maybe it went down from, like, 20 at each location to two or three or four, but we got, like, 1,000% more banks. So teller jobs went up, which is the ironic thing out of that situation.

So I think of it like this: the vast majority of the work that at least I, and a lot of others, like to do is the non-routine work.

Yes.

The things that require cognitive surplus to approach. AI gives you that, right? Because instead of having experts doing routine work, you have experts who are able to really deep-dive on the very few exceptions that come along and require a lot of attention, while the things that don't require much attention are handled automatically. And that will ultimately make any company, any enterprise, better, because they can spend the necessary time to actually do the most productive, forward-looking work, right? And that's what gets me excited: removing routine work is kind of like, and this is my hope, the positive thing, a renaissance effect.

I agree.

Where suddenly you get a cognitive surplus to dive in, to deal with innovation, to actually get through the deep reading and the deep research you need to do. You have the ability to think about the strategy, and not just have ChatGPT write it, right? Which, unfortunately, happens. So that is my belief and my hope, and why companies need to invest as well in helping their highest-paid experts get AI tools, so they can truly be experts and have the routine work done for them.

I absolutely agree. It's going to take us...

I'll help them with that.

Yes, you will. Yes, you will. I was going to say, it's going to take us deeper. And can you make sure to plug what you want, obviously, with Openstream, to let everyone know how you can help them out and where to go to check out more of what you're doing?

Contact me on LinkedIn; look me up. I'll jump on a call, and we'll discuss how Openstream can help. Truly, we enable a combination of different AI technologies together, including generative AI, to deliver hallucination-free and hallucination-reduced solutions. And we have generative AI solutions in production at big companies, and I don't think many companies can say that.

That's pretty impressive. It's a marketplace of proofs of concept out there, right?

Yeah. Well, thank you so much, Magnus, for being on the show. I really appreciate you being here. Make sure to go check him out on LinkedIn, and check out openstream.ai to learn more today. Thank you so much for watching, and we'll see you in the next one.

Thank you so much. Bye.