Episode 63 Jul 31, 2025 55:15 3.6K views

Building AI Agents That Learn - Insights from Elman Mansimov, Founder of Sourcely

About This Episode

In this episode of the AI Agents Podcast, we dive deep into the evolving world of AI agents with Elman Mansimov, founder of Sourcely.

Elman shares how his background in generative AI research—from early text-to-image models to pioneering work in machine learning at the University of Toronto and NYU—has shaped his vision for building intelligent agents that can truly learn, adapt, and reason.

He discusses the challenges and breakthroughs involved in developing Sourcely, an AI-powered academic tool designed for deep search and source verification across diverse fields from fitness to academia.

Explore how Elman sees the future of autonomous agents transforming everything from academic research to software development, and why context length, reasoning capabilities, and user personalization are key to building smarter AI tools.

Whether you're an AI enthusiast or a product builder, this thoughtful conversation reveals what it really takes to build agents that learn and deliver value in meaningful, nuanced ways.
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⏰ TIMESTAMPS:
0:00 - AI Tools Boost Productivity
0:59 - Introducing Sourcely and Elman
2:32 - AI in Academia and Industry
4:57 - Expanding Sourcely’s Use Cases
7:01 - Early Days in AI and Machine Learning
10:59 - AI Pop Culture and Self-Driving Cars
14:27 - From AI Research to API Usability
17:45 - Building with AI as a Non-Coder
20:38 - Growing Competition in AI Products
26:32 - Impact of AI Hype and Online Gurus
31:31 - Improving Accuracy in AI Search Tools
45:01 - Leveraging Longer Context Windows
50:15 - Making Sourcely More Accessible
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Sign up for free ➡️ https://link.jotform.com/a1Amoh9XGg
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Follow us on:
Twitter ➡️ https://x.com/aiagentspodcast
Instagram ➡️ https://www.instagram.com/aiagentspodcast
TikTok ➡️ https://www.tiktok.com/@aiagentspodcast
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

Transcript

Like, even as a person who is technical and who's been in computer science myself, I'm much more productive through these tools and I get a lot more done. And it also is very fun, you know, because I feel now that I can pretty much learn anything or figure out anything with AI. I feel like overall these AI tools made it more fun to learn new things, and everyone is supercharged to the point that even when they can't immediately figure something out, there is that desire to figure it out and move forward. >> Hi, my name is Dmitri Bonichi and I'm a content creator, agency owner, and AI enthusiast. You're listening to the AI Agents Podcast, brought to you by Jotform and featuring our very own CEO and

founder Aytekin Tank. This is the show where artificial intelligence meets innovation, productivity, and the tools shaping the future of work. Enjoy the show. Hello everyone, my name is Dmitri, and in this episode we have Elman from Sourcely. How are you doing, Elman? >> Good, good. What about you? >> I'm living the dream. Very nice to have you on the podcast. You're going to be talking today about your product, Sourcely. So first and foremost, tell us a little bit about who you are, how you got to where you're at with Sourcely, your whole lead-up to it, and what it is as a product. I always love hearing the origin stories of everybody who comes on the podcast. >> Yeah, definitely. So my name is Elman, and I've been in the AI field for 10

years, a little bit more than 10 years. I started doing research at the University of Toronto building some of the very first generative models. I built a very early text-to-image model; back then the images generated were very tiny, like 32 by 32, where you squint and you kind of see the promise of the technology, >> very far >> from where it is now. >> But I went through the whole evolution and cycle of AI. I did a PhD at New York University, so I've been in academia and been in AI at the same time. And starting from around ChatGPT time, we realized that we're now at the point where we're not only making an impact by publishing papers but also

through products as well. And coming from an academic background, I realized that there is an opportunity, right? There is an opportunity to supercharge pretty much every profession, academics being one of them. That's how I came to Sourcely, which ended up being an acquisition. It was originally built by folks in the UK just to help themselves. They were dental students in the UK, and they were like, okay, we want to help ourselves find sources last minute for our essays and papers. But once it got acquired and I was working on the product, I realized that actually there's a much bigger market and much bigger use cases for it, not just

for, let's say, students or academics who are writing. For example, people in the fitness industry, or let's say nutritionists. Say they hear a statement like: if you exercise a certain way or eat a certain diet, you'll get stronger, or you'll lose weight, or you'll increase your muscle mass faster. Whenever they hear that, they need to know where it's coming from, as well as the evidence behind it. And of course there are things like ChatGPT and Perplexity that focus a lot more on web-based sources, but at the same time, for more power users there is a need to go and dig deeper into, let's say, more academic sources,

to have control over, let's say, the year and the type of journals it's searching. And what I'm excited for, the part of being an agent, is that now you're not only surfacing the high-level information about the sources, but you can actually go deeper into the sources, read them properly, rank them, and then summarize in bullet points: okay, this source has that information; for example, the authors are very well known, I did the research on them, and >> there are follow-up works you should pay attention to whenever you try to verify whether that statement is true or not. >> Interesting. Okay, so obviously it seems like academia was kind of a big thing in your life, and that maybe was part of the spark here. I'm curious, how

did you end up going from Toronto and then have the interest to go to NYU? How did that happen? >> Yeah. So I was just applying. At the time, at the University of Toronto, I was thinking, okay, at the very end, >> what should I do next, right? And the University of Toronto had a very strong machine learning lab; Geoff Hinton and his students were there. I got an opportunity, as part of my course load, to do research in machine learning with one of the authors of some of the biggest and most influential papers in machine learning, like dropout and so on. And when I was graduating I was like, well, I want to continue doing that. So I took a gap year, continued

doing research at the University of Toronto, applied to different schools, and New York University was one of the schools I got accepted to, and most importantly, a chance to work with my supervisor, Cho, and the lab there as well. So I felt like it was a great opportunity. Lots of leaders in AI, machine learning, and deep learning were at New York University, and it felt like a great city and a great place to continue my work in AI. >> And what year did that move to New York happen? You said, I think when we talked before the call, 2016. Okay, so it's kind of interesting. To the rest of the world, that's pretty early in regards to AI, like we hadn't really

seen >> generative AI be popular until late 2022 or 2023, I always forget which year it was, but yeah, that was pretty early on. So when did you start, I guess, >> seeing this transition towards >> what you're doing on a day-to-day, like an interest in AI in general? What drew you to AI? Because everyone has their own story, and mine's with the recent hype train, but you were early into it. So how did you get into it early? >> So I did my undergrad at the University of Toronto in computer science, and what was evident to me, or at least my feeling, obviously it's hindsight, but my feeling was that, >> well, the way we typically program computers is through algorithms,

meaning that, let's say, in your head you have a problem, and then you're like, okay, we're going to have a certain solution, we're going to program it with certain rules, so that the computer can execute the same program every time. But for me, seeing the progress of AI at the time, what happened was, for example, just at that time image recognition, the ImageNet competition, started to work. Deep neural nets started showing that, okay, there is a big improvement from learning, through machine learning, on recognizing images. And I realized that, in general, a lot of the problems in computer science cannot be solved by just hard-coding the algorithms. For example, if you take image classification

as an example, there are millions of ways, let's say, dogs and cats are different from each other, >> in different lighting, the ears, and so on. So the natural way to do it is through learning. You feed the data into the classifiers, and you learn the patterns, you learn how to do that. And then of course deep neural nets made a lot of sense, because they were nonlinear; they learn abstractions and a hierarchy of features, so it felt like a natural extension of machine learning and AI. And you mentioned that, of course, at that time a lot of the focus was more on discriminative models, as you said, classifiers, sentiment classifiers and

so on; people weren't talking about generative AI. But there were people who were pushing the envelope of what we now call generative AI. For example, >> my adviser worked on machine translation, and he was an inventor of the attention mechanism in machine translation. And >> even at the time, 2016-17, Google was deploying Transformer- or LSTM-based models, recurrent neural net models, for Google Translate. So you would go to Google Translate, you would type a sentence, and those were the early innings, the early grandfathers, let's say, of ChatGPT-like models, already deployed in production. Of course it wasn't as prevalent; the applications were much more limited and the quality wasn't the same either. And then, as we keep going along, the hardware

improves, we're discovering more and more tricks on how to improve the quality of the generations, and then with pre-training and scale we start putting a lot more data and information into these models, making them accessible to everyone rather than, for example, limited to tasks built by academics. >> I find it really interesting how the culture, and I guess even social commentary, kind of aligns with the times. So, have you ever seen the show Silicon Valley? >> I've seen the first or second season a long, long time ago, but it's been so long. >> Okay, it's been a while. So there's a character who ended up being more prevalent in probably the third season, and there's an episode of him getting an idea. His name was Jian-Yang, and he got an idea for an image classifier.
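The learning-based approach Elman describes above, fitting a decision rule from labeled examples instead of hand-coding it, can be sketched in a few lines. This is a minimal illustration only: the "features" and data points are invented for the example, and a real image classifier would be a deep neural net trained on pixels, not a perceptron on two numbers.

```python
# Toy illustration: learn a classifier from labeled data instead of
# hand-coding rules. A tiny perceptron on made-up 2D "features".

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Fit weights and a bias from examples via the perceptron update rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred  # 0 when correct, +1/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, point):
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else 0

# Invented "cat vs. dog" feature points: class 0 clusters low, class 1 high.
X = [(0.2, 0.1), (0.3, 0.2), (0.8, 0.9), (0.9, 0.7)]
y = [0, 0, 1, 1]
w, b = train_perceptron(X, y)
print([predict(w, b, p) for p in X])  # recovers the training labels
```

The point of the sketch is the one Elman makes: the rule that separates the classes is never written down by the programmer; it falls out of the data.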

Well, he didn't actually get it; it was forced on him. Do you remember the hot dog thing? >> Yeah, yeah. Hot dog. It was like an app called SeeFood. >> Yeah, SeeFood. It's like Shazam but for food. It's so funny, because I think around the timeline you were describing is when that season of Silicon Valley came out. So that's what immediately came to mind with image classification and how far it's gone. But okay, >> it was big at the time. Yeah, people were on self-driving cars, because people were realizing that, okay, we are now getting much better at understanding images, and that's how the whole evolution of self-driving cars started, like 2016-17 with Uber and that whole mini hype cycle back

then. >> Yeah. What's mind-boggling to me is, you know, the companies we have now. I mean, you take a look at it, and we use ChatGPT on a daily basis. Say, for example, you're someone like me, or whoever, and you're just using ChatGPT, using o3 now. You can send o3 a picture and it can figure out where on the planet you are. So we've gone quite a ways, improved. It's just fascinating to me. >> So, >> moving forward into what you're... Yeah, exactly. And my question then would be, as somebody who was kind of in the know about it earlier, how, from your perspective, did you see that transition into AI being at the forefront of everything? You know, how was that for you when you were experiencing it, working in it

more commonly than others, and then it became the forefront of the world? What was that like for you? >> I think it was a big change, and I think it's a pretty big change even happening within AI, for people who work in AI, right? I would say, beyond publishing papers or working a lot on techniques, what people started realizing, or started doing, is actually exposing these tools to everyone. For example, GPT-2 and GPT-3, the early kind of API access: >> people started democratizing these tools, those AI models, not just for people who, let's say, have GPUs, or know how to run PyTorch, or train all these models, but rather

than as an API endpoint, where anyone in the world, no matter what your background is, as long as you can code, can just quickly send something to that API and do something with it. And I feel like that's where the very big shift started, where you actually don't need to be an AI expert understanding the technology inside of it, transformer architectures, statistics, math, but rather you can be very creative and utilize it to the max. I think that's a skill of its own, and it's not an easy skill. People kind of make fun, like, oh, you just prompt-engineered something to make it work, but I think there's a lot of nuance in how to extract a lot of value from the models. And ever

since then, I feel like copywriting was one of the first applications, in 2021-22, where people were like, okay, let's just improve a copy, or slightly improve a sentence. And then ChatGPT, I think that was a big one, where it's just a common interface, a conversational interface, for everyone to use. And ever since then, I think personally for me agents were also a big change, because >> the fact that it can autonomously, or let's say semi-autonomously, have a loop where it's like, okay, I'm going to try to >> interactively >> solve the task, I was like, wow. Even as a person who's been in the field, it was pleasantly surprising to me

that AI can figure it out, have a plan, do something, and repeat until a certain conclusion is reached. >> Well, what are your thoughts on, as you've seen that improvement towards agents, right? This goes a lot of different ways. It's not only generative content; it's also making products. There are tools out there; you obviously acquired something, but people are out here building these different workflows and even products on things like lovable.dev and whatnot, and there are code copilots now. What are your thoughts on how that's helped you, or how it's going to help the industry in general, to bring more products to availability? Because I'm not a coder, but I've been able to throw something together with Lovable that actually works, because I have automation and API knowledge and I can

just prompt it to map things in the ways that I'd like it to, you know what I mean? What are your thoughts on how that's going to impact things? >> I would start with me personally. Using these AI tools for coding started for me with Copilot. I don't know if you remember, in 2021 or 2022, Microsoft with OpenAI released Copilot, which was the autocomplete for the editor. >> And personally, I would code in Python; I was pretty familiar and comfortable in Python, but I didn't know at all about web technology, like how to build in, let's say, React or JavaScript. Just doing that felt like a different world to me that I could never tackle. But with Copilot, I

could feel like I'm a little bit supercharged. You know, I'm getting somewhere. I would put a lot of comments around, trying to trigger Copilot to do the thing for me. And of course, you go to ChatGPT, you start coding with ChatGPT in a separate window and try to describe the issue to get it resolved. And for me it was a major unlock. Even as a person who is technical and who's been in computer science myself, I'm much more productive through these tools and I get a lot more done. And it also is very fun, you know, because I feel now I can pretty much learn anything or figure out anything with AI. And even without AI, like

even if I can't figure something out, maybe I can call some friends, or try to hire or get someone to help me. But I feel like overall these AI tools made it more fun to learn new things, and everyone is supercharged to the point that even when it can't figure something out, there is that desire to figure it out and move forward. And then, as you mentioned, these tools like Lovable and Cursor and so on, I think they're very fun to use. These days I'm playing more with Claude Code, and trying to see, with the background agents as well,

whether, rather than me sitting in front of a computer coding, >> I can, let's say, create certain GitHub issues or tasks, and it would go do them in the background, >> but then >> come back to me and not just write code but hopefully even present to me how it looks. So I can pick and choose, say, okay, this is a good change, it looks visually appealing, let's keep moving in that direction. So yeah, it's evolving at a crazy speed. I'm pretty happy about it. >> Okay. So my question then is, how long have you been doing what you're doing with Sourcely again? >> It's been about >> two years. >> Two years. Okay. And you've kind of seen the landscape adjust, maybe get more competitive.
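The agentic loop described in this part of the conversation, propose an action, execute it, check the result, and repeat until a conclusion is reached, can be sketched minimally. Both `call_model` and `run_tool` here are stand-in stubs invented for the example; a real background agent would query an LLM for the next action and actually run code or edit files.

```python
# Sketch of an agent loop: propose an action, execute it, inspect the
# outcome, and repeat until the "model" decides it is done (with a step
# cap so the loop always terminates).

def call_model(task, history):
    """Stub model: proposes the next action, or 'done' once the last
    attempt succeeded. A real agent would query an LLM here."""
    if history and history[-1].endswith("ok"):
        return "done"
    return f"attempt {len(history) + 1}"

def run_tool(action):
    """Stub tool execution: pretend only the second attempt succeeds."""
    return action + (": ok" if action.endswith("2") else ": failed")

def agent_loop(task, max_steps=5):
    history = []
    for _ in range(max_steps):
        action = call_model(task, history)
        if action == "done":
            break  # stopping condition reached
        history.append(run_tool(action))
    return history

print(agent_loop("fix the failing test"))
# -> ['attempt 1: failed', 'attempt 2: ok']
```

The step cap is the design choice worth noting: semi-autonomous agents generally bound the loop so a task that never converges still hands control back to the user.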

That's my question: has the landscape gotten more competitive and more fast-moving in the time that you've been working on this product? >> Yeah, I mean, there's a lot of competition now, right? For example, with Sourcely, for search, academic search, or finding the right sources, there's competition both from generic products like ChatGPT, chatbots with sources, and also from specialized tools that help people narrow down academic sources and give evidence for them. So yeah, there's definitely a lot of competition. But I guess what personally helps me is just being motivated by the problem itself, and rather than just paying attention to the competition, paying attention to how people use it and

getting a feel for its unique advantages. For example, the unique advantage of Sourcely is that all the other chatbots or academic search engines focus more on short queries: you would type in maybe a sentence, for example, how does GLP-1 affect muscle mass, and that's it. >> Whereas Sourcely's advantage is that you can copy-paste arbitrary text, arbitrary-length text, and it would go and find sources for that. >> What do you mean by arbitrary? >> Like, you can copy-paste, let's say, five pages of your text. >> Okay, okay. >> You can paste arbitrary-length text, as much text as you want. It will go find the sources, but it will also select the parts of

the text that need evidence. For example, that would help people who are maybe writing grants, or writing papers, who have a chunk of text written and need to figure out where to add sources and what to add there. So, on a high level, what I was trying to allude to, and I think the same thing goes for agents for coding: if you zoom out, they kind of seem similar to each other, and it feels like everyone is just competing with each other. But once you zoom in and start using them, you realize there's a lot of nuance that makes them different and makes them stand out. And in that way, that's kind of like

how you get your customer, right? Some person maybe wants that flavor of the product; they go, they use it, they pay for it, and maybe others go to another one. And you can switch, right? You can cancel, go to another one, use it, then come back and use this one as well. So that's how I personally think about it. But yeah, of course the competition is there. AI is probably the number one technology topic in the world right now, so definitely there's a lot of competition, and everyone is trying to move as fast as possible and get ahead. >> What recommendations would you give to somebody who's in a new position building out a product like yours? Well, not in the same industry, obviously,

but building out something like you're doing in general: how do you scope out what interests are out there for the taking? Because obviously people just go and say, oh, I think I could make an app like this, but how do you verify that there's even a need for what you're making? >> You know, I think that testing demand is always tricky, because there's always the conventional advice: go try to talk to customers, or some people say, oh, just ship an MVP as fast as possible and you will know the demand. If you ship an MVP as fast as possible, let's say in one week, then you immediately show it to users, and

you will know whether they would like to pay for it or not. But honestly, I think at the end of the day, the best advice is just do what you think is right, or what you think is the right thing for you to do. Of course, if you have an idea, go try to use one of these tools, agent tools, coding tools, to get it off the ground. But then once you start building it, you start realizing there is a lot of nuance to it, and even if you launch the first version, you will realize there's a lot of nuance in how people want to use it, or unexpected ways they do. So yeah, even though people claim,

you know, you go on Twitter and people claim, oh, in 24 hours I can replicate Notion, or I can replicate some massive software vendor or SaaS solution, you then realize that even if you replicated them on the surface level, there's a lot of nuance that goes into how these things work and how people use them, and that nuance ends up mattering. And that nuance, I think, can only be acquired with time, and no one can simulate time faster than anyone else, right? Everyone is running on the same clock. So the only way to do it is to keep running with time, and over time you will figure it out, keep doing it, and stand out in your own way. >> Yeah, I want you to touch on

that a little bit more, actually. As somebody who's in the space, interviewing a lot of people like you, trying to mess around with making agents and stuff like that, I have some comments and some questions as well. Do you see this as a big issue, or do you get, I don't want to say offended, but do you get a little bit irritated when you see these types of posts? Because for me, it almost feels like misinformation, or the equivalent of our gold rush; it's the equivalent of the fake guru snake oil salesmen from the 2010s who were selling online marketing courses. And I do have some concerns, when I see those types of posts, that the same

vices of 10 years ago are coming back, where people are like, look, I did this in X amount of time, you can too, buy my course. And I go, I'm technically knowledgeable, and the people who are probably looking at this, that you're trying to attract, are not. I know you can't make a hundred apps in Lovable that function from a backend and frontend standpoint in a week. So these are my concerns. What are your thoughts on it? Because that's where I kind of wanted to bring the conversation. >> I mean, I think there are two ways of thinking about it, right? One way is from the marketer's perspective, versus the person who is a user or listening to them. From the marketer's perspective, I think you

will learn a lot about hooks. Clearly, the reason they're doing that is they want a very strong hook to attract attention, and obviously if you make a very strong statement, like, look, I just made software that makes 100 million a year in 24 hours, people will pay attention to that because it feels crazy. How can anyone do this? So on one hand it's a hook, and as a person who's been doing marketing for Sourcely and other products, you realize that >> it's a hook; you need to have some kind of strong thing in the hook, otherwise people will just quickly scroll and not listen. But then, as a

user of these things, I personally don't feel offended. I feel like it's just the reality of the attention game that we live in, and there is some truth in it. Obviously there's some overstatement, but there's some truth in it as well. I personally try to make a judgment for myself, right? If I see something crazy, I try not to jump to a conclusion right away, but if it's something related to me, or something I would use, I try it myself and then make a conclusion after. So, I guess at the end of the day, there's definitely some overstatement happening right now, and people are overclaiming results, but in some ways there

is something interesting to learn. And on a high level, I would say that when it comes to, let's say, coding and research, things are getting faster and more automated. You still have to rely on humans, you still have to use human taste and opinions, but you've got to keep using these tools, whether it's for writing, for research, for coding, for podcasting, whatever. They definitely make us more productive and make it more fun. >> What are your favorite, I mean, it seems like Claude Code is pretty nice to you, but what are your favorite AI agent tools, if you're using any? >> So in terms of AI agent tools, let's say for coding, I like Cursor as an editor. It's

very convenient to use for development. >> I used Lovable before, and I find it more fun, and the designs are more interesting compared to, let's say, Bolt or v0 or others. I haven't tried Replit, for example. Claude Code is something I'm experimenting with; I don't have an opinion yet, but it seems very promising from the first results. And there's Gemini CLI, there's Codex CLI, that I still haven't gotten time to try. I also use, maybe not agents, but let's say more like copilots, for writing. For example, the sister product of Sourcely is Yomu, and it was originally meant to be a

Notion-like editor for academic writing, and I personally use it myself. Whenever I write something, I go there, I select parts of the text to edit, or try to add my voice to it as well. I think that writing, maybe it's not as agentic, but writing is one of the things where AI, or let's say generative AI, made a big difference. And then yeah, as much as I work on Sourcely, I also use it myself. For example, I was recently on a long flight to Asia, and I was looking up information on whether, if you wear, I forgot what it's called, the things for your legs, you know, if you sit for a long time your

legs get bloated. >> Compression sleeves, compression pants. >> Yes, compression pants. So I was looking up on Sourcely what the academic sources say, studies on whether compression socks or compression pants make a difference. >> Wait, I'm actually curious what you found, because I'm a former track and field athlete, and I found very conflicting research on this when I looked into it for recovery reasons. I'm curious, what did you find? >> So I was only looking at the airplane case, just whether compression makes a difference for the airplane, for less bloating. And yes, it does; the majority of the references actually confirm that wearing compression socks

improved it. The interesting tidbit I found, through the summary inside the source, is that there was one paper that did a study on whether sitting in business class versus economy class makes a difference for your legs being bloated, and they found there's actually no difference. So technically, you feel more comfortable in business class, but in reality, whether you wear compression socks in economy class or business class, the impact would be the same. >> Interesting. I bet that's probably because the bloating is correlated to the altitude more than your legs' positioning, right? Or am I misspeaking? Okay. Yeah, because a funny thing is, people who

fly private experience less jet lag and bloating, because the altitude is significantly lower. >> Oh, interesting. >> Yeah. People always ask how these people with all this money manage it, like, oh, they have the whole plane to themselves. Not actually, because I looked into it, and the reason is >> they're flying at a lower altitude. >> Yeah, a lower altitude, so there's less pressure. Interesting. >> That study was only looking at commercial flights, business versus economy, and yeah, that was interesting. That's the kind of thing I like using it for. For me personally, as a user of the tool, I want to

see more academically vetted sources and compare them. And obviously I use ChatGPT too, for generic, maybe pop-culture type information, just to get a quick generic fact about something. >> Okay. What other cool use cases have you heard from your customers, where they found good information with your product? >> In general, the way I measure it is implicit, because what I find is that the people who like it the most use it the most. And one of the ways to verify that for academic sources is export. Let's say when you see a source and you want to add it

to your paper, you export it, meaning you get the full author list and the right formatting to put it inside your paper. So one of the things I track, besides token usage, is export usage, because if someone exports a paper, it means it's actually relevant to them and they're trying to save it in their library or import it into their paper and academic work. But I would say the primary group of people I talk to who find it helpful are academics. They say, okay, I'm writing a paper, this is the background section of my paper, and I'm trying to find more sources related to it. So they

copy-paste their background section, which is a couple of paragraphs, and Sourcely gives them additional sources they maybe haven't seen or haven't considered. So it helps them with their research work. The other type of usage I see, which was cool and unexpected this year, is people in, let's say, fitness or nutrition. They need more evidence for claims, like the example I told you about wearing compression socks on the flight. They have all these statements coming in, and they'd like to run a search to see whether the academic literature agrees with them or not. And what I'm realizing from seeing people use the product, and what's kind

of interesting about the agent form coming in, is this: on one hand, people debate the user interface of Google moving toward something more like ChatGPT, where everything is chat-based. But what people don't really pay attention to is the other direction: when people search for information and see the sources, what if the agent takes more time to understand those sources and presents the information inside them in a more comprehensive way? What I mean is that Google search right now tries to be as fast as possible, maybe giving you the right information at a surface level. But imagine the agent >> quickly scanning through thousands of papers, reading through each

of them, and then, after reading all that information, telling you: okay, here are the 10 sources that are most relevant to you, and here is the verbatim citation from each source that I found relevant to the niche statement you're asking about. And here you go: if you want to do more research, click on the source, read it, and bookmark it for yourself.
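The screen-then-cite loop described here can be rough-sketched in code. Everything below is a hypothetical illustration, not Sourcely's actual implementation: `llm_relevance` stands in for a real long-context LLM call, faked with simple keyword overlap so the example runs on its own.

```python
# Sketch of an agentic deep-search loop: screen every candidate paper,
# keep the most relevant ones, and attach a verbatim supporting quote.
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    full_text: str

def llm_relevance(claim: str, paper: Paper) -> tuple[float, str]:
    """Placeholder scorer: a real system would prompt a long-context LLM
    with the full paper. Here we fake it with keyword overlap and return
    (score, verbatim quote)."""
    words = set(claim.lower().split())
    best_sentence, best_hits = "", 0
    for sentence in paper.full_text.split("."):
        hits = len(words & set(sentence.lower().split()))
        if hits > best_hits:
            best_sentence, best_hits = sentence.strip(), hits
    return best_hits / max(len(words), 1), best_sentence

def deep_search(claim: str, candidates: list[Paper], top_k: int = 10):
    """Screen all candidates, then return the top_k with supporting quotes."""
    scored = [(p, *llm_relevance(claim, p)) for p in candidates]
    scored.sort(key=lambda t: t[1], reverse=True)
    return [(p.title, quote) for p, score, quote in scored[:top_k] if score > 0]

papers = [
    Paper("Socks study", "Compression socks reduce swelling on long flights. Other text."),
    Paper("Unrelated", "Protein intake and muscle growth. Nothing about flights."),
]
results = deep_search("do compression socks reduce swelling on flights", papers, top_k=1)
print(results)  # → [('Socks study', 'Compression socks reduce swelling on long flights')]
```

In a real version, the scoring step is the expensive part: each candidate paper (tens of thousands of tokens) goes through the model, which is exactly why longer, more reliable context windows matter for this design.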

>> Interesting. I already asked you for a personal example, but what has been hard about making this product, about meeting people where they're at? I thought that was an interesting point about how you look up stuff with Google. Actually, you know what, I want to ask this, because accuracy is probably big for you, I'd imagine. I don't know the exact number, but I know from experience with Google searches and the new Gemini summary that maybe it's right 65 or 70% of the time. With niche subjects it's off base a good amount of the time, and it usually draws on Reddit and the like. What are your thoughts, in general, on, I don't want to call it hallucination exactly, but the lack of full-context answers from AI, and that being accepted by everybody as truth, even

though it's not necessarily always the case? >> Yeah, I honestly think it's not necessarily a limitation of the technology but a limitation of the setting. Let me explain. Take Sourcely, and then take Google search or other search products. What I find is that everyone has their own expectation of how information should be presented to them, and the product itself has its own constraints to keep in mind: latency, cost, how many people they have, and so on and so forth. So when it comes to Google, just the Google search, I think they're trying to have a good search

for everyone, so that the average query gets answered. But when you go one step further, maybe you want more scientifically backed evidence, you go to something like Sourcely or other products, and you get some information. And I think a big factor there is personalization. Maybe you like sources presented a certain way. For example, some people only want sources from top journals, or peer-reviewed ones, and that's when they feel comfortable that it's actually a correct fact. But others, for example in computer science, where things move a lot faster because people publish blog posts or put preprints on arXiv, don't really care whether a paper

is published or vetted by reviewers; if people talk about it, or you hear about it somewhere and then it shows up, you're completely fine with that. So what I'm trying to say is that a lot of the problem is about personalization, too. And if, the more you use a product, it implicitly captures your preferences and personalizes more and more to you, that could be an interesting opportunity. That's how I think about these products, too. And I feel one of the things enabled by these LLMs is that they implicitly reason about stuff. Previous discriminative classifiers couldn't really reason; they were like, okay, this is the

data we were trained on, we do it this way. These models can reason better, and that's why I think products might change to have personalization baked more deeply inside them, because they rely on LLMs that can reason. And once you sign in, just from some summary of your usage, the product can become more personalized to how you use it. >> Where do you think this level of reasoning is going to improve? If I'm not wrong, the quoted IQ of these models is in the 120s or 130s now on the reasoning end. Where do you think this reasoning is going to go? Because we've had a drastic improvement, since reasoning models didn't come out, I don't want to misspeak, but I don't think

until like April, >> maybe March, is that about right? >> I mean, you could say there was chain of thought before that, which was the precursor: the models implicitly think in a scratchpad and then give an answer, and I think these reasoning models are trained with that, trained on the process of reasoning from the start. And honestly, I have conflicting feelings about these reasoning models, because on one hand they're actually much smarter and think a lot better than before, especially on math-type problems, where you need to juggle a couple of facts together like Lego blocks, you know, you need to

juggle a couple of facts and combine them to give an answer. They're much better at that than before. But on the other hand, the way humans get smarter is different. It's like a staircase: you learn basic math, then a bit more advanced math, then in school you get calculus, then graduate-level material. And once you're good at graduate-level math, it doesn't mean you can't do basic math and basic facts anymore; it's a staircase you climbed to get there. Whereas these models can hit a 130 IQ on some graduate-level math, and at the same time that doesn't guarantee the basics. >> Yes. Exactly. >> So maybe we think about them in, we should think about

them in a different way, because IQ is something we apply to humans, but for models maybe we should also think about consistency, or other properties baked in, not just the ability to answer some very hard or niche question. So I don't know. I think it's still up for debate how we think about these models and how we think about intelligence. Comparing it to human intelligence is one way, but I think we should also think about the intelligence of these models in another way, and I feel it's not fully defined yet. >> You know, it's an interesting point. The reasoning models can't answer the question of how many Rs are in the word strawberry, >> right? If I'm not wrong, or how many letters are

in a given word. >> As well, yeah. >> So that's a good point: the way people learn is different from the way models learn. I guess, when it comes to your product, what are some of the things you would like to add moving forward, given the models that exist now and the improvements of the last year? Obviously they are getting smarter, even if they can't tell us how many Rs are in strawberry. What have you taken advantage of, and how do you think your product can continue to grow with the improvements in AI in general? >> Yeah. So one thing about the models that people maybe don't talk about as much is context length. And what I mean by that is, let's say

a year or two ago the context length was, let's say, 8,000 tokens, maybe 16,000 tokens. It was relatively small. But now context lengths are getting bigger and bigger, and the models are getting better at handling longer context. Which means that an academic paper, or a source, which is typically a lot of tokens, usually 20 to 60k, is something these models can actually understand much more of. And what that leads to is more and more agentic search. What I mean by that is, rather than just looking at the high-level information and understanding that, okay, you're talking about, let's say, flights and compression socks, I'm going to give you high-level sources that mention that, or that are semantically and syntactically related to that,

what can be done now is: I'm going to take a certain number of papers that I think are relevant, and I'm going to actually screen them. With these models that are much better at longer context and at understanding these papers, I'm going to read them, and I'm going to give you, first of all, much more relevant search results. And on top of that, I'm going to pre-chew some of the information on how it relates to these sources, and present it to you: okay, here's the checkmark for why this paper is relevant, here is the copy-pasted evidence for it, you can see it in the screenshot I captured at the link, and now it

is more relevant to you. So yeah, on a high level, the way we consume information is improving thanks to these models, and there's still room to explore and experiment with how these agentic models that understand context better can chew this information and present it in a more readable, more comprehensive way, so you can continue doing your research. >> Well, my question is, I'm a little bit confused on the context window thing, because I had a previous guest on last week, and these context window numbers are getting bigger and bigger, right? But when you ask a question, for example, you could take a huge, well, his example, I'm trying to remember his example specifically. I guess I

can make a long story short: he put a text blurb inside a Harry Potter book. The Harry Potter books, for all intents and purposes, are way smaller than the context window of Gemini. And he put a text blurb in there saying the coin is in the cave on the mountain, then he put it into Gemini, which claims to have this context window with the ability to analyze it all, but it doesn't give the answer correctly. Why is that the case? >> Well, I think what's being referred to is that you can think of an effective context window. There's, let's say, on paper how many tokens you support, and that context window can be based on the hardware or whatever the offering is. But then

there's also the reality of how well they perform with that context. >> Okay, that's actually good. So maybe the thing is that in reality, in practice, it's just not as good. >> It's like, for example, the claims about iPhone batteries: they claim a Mac battery can hold up to 24 hours. Yes, it can, but only in a setting where you're not running anything; you put your laptop in a certain environment, and then the battery can hold up to 24 hours. But realistically, you're going to be running several apps and so on, and then the battery time is actually 10 hours. So it's a similar analogy here. What I want to say is that one thing is actually working

well with a longer context, and this has improved, I think, practically. >> Yeah, no, it's better, don't get me wrong, it's better. So you're basically saying the statement versus the reality is essentially >> the effectiveness of the context window. The context window is getting longer, but the presupposition people have is that it's going to be as effective as when it was a 2,048-token context window. >> And that's not the case. >> Yeah. Okay, that's a good point, because in the fall of 2022, I remember distinctly, ChatGPT's 3.5 Turbo had something like a 2,048-token context window, >> and I think what ended up happening is that all of us saw a direct correlation between the effective context window at the time and what you

were putting in. So your claim is basically: yes, Gemini has a million-token context window, but the efficacy over that length is just not as pinpoint accurate as it was at 2,000. >> Yeah. >> Okay. >> Think of it as a plot, right? Let's say the x-axis is the context window, and the y-axis is your accuracy. Obviously you degrade as you increase the context length, but what we also care about is that curve being a little bit higher. Meaning that at, let's say, 100k tokens, your accuracy is quite a lot better than it was a year ago. And that's what matters, because a lot of the time you're not going to put whole Harry Potter books into the context. You're going to maybe, practically, put a couple

of papers, or one paper, which is going to be a couple of thousand to a couple of tens of thousands of tokens. And the model understanding that a lot better than a year ago makes a lot of practical difference for people. >> No, that's a very good point. I guess I hadn't considered it from that perspective; I just made the assumption it was going to find things with precision. But no, that's a very good point. The claim, I guess, has never been that it's going to know exactly how to go about this every single time. Interesting.
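The Harry Potter experiment described above is essentially a "needle in a haystack" evaluation, and the distinction drawn here, claimed versus effective context window, is exactly what such a harness measures. Below is a toy, self-contained version: `toy_model` is a hypothetical stand-in that just string-matches, so it scores perfectly by construction; pointing the same harness at a real long-context LLM is where the accuracy-versus-length curve would start to degrade.

```python
# Toy "needle in a haystack" harness: bury one fact at a random depth in
# filler text of varying length, then score how often the model finds it.
import random

NEEDLE = "The coin is hidden in the cave on the mountain."

def build_haystack(n_filler: int, depth: float) -> str:
    """Filler text with the needle inserted at a relative depth in [0, 1]."""
    filler = ["The wizard walked through the castle halls."] * n_filler
    filler.insert(int(depth * n_filler), NEEDLE)
    return " ".join(filler)

def toy_model(context: str, question: str) -> str:
    """Stand-in for an LLM call: a real model would be prompted with the
    full context plus the question, and often fails at large lengths."""
    return "in the cave on the mountain" if NEEDLE in context else "I don't know"

def run_eval(lengths=(100, 1_000, 10_000), trials: int = 5) -> dict:
    """Accuracy at each context length, with the needle at random depths."""
    random.seed(0)
    return {
        n: sum(
            toy_model(build_haystack(n, random.random()), "Where is the coin?")
            == "in the cave on the mountain"
            for _ in range(trials)
        ) / trials
        for n in lengths
    }

print(run_eval())  # → {100: 1.0, 1000: 1.0, 10000: 1.0} for this perfect toy
```

With a real model in place of `toy_model`, the dictionary this returns is one slice of the accuracy-versus-context-length plot described above: the claimed window is whatever the API accepts, while the effective window is wherever those accuracy numbers stop being acceptable.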

>> Okay. Do you have any other thoughts, generally, about where AI is going for your business? Because we're getting close to the end of our time, but I really appreciate your insights here, and I'm finding some good answers to some random questions that I think the audience probably has. I guess, what are you most excited about, as a company, in the next calendar year? What's one feature that you'd love to add, or that you're working on, that you're excited about and that people would be interested in, so they can check you guys out? >> I think one thing that I'm excited about is making it a little bit more accessible for people who are not necessarily academics or power users. Power users come in and they want the sources, but they also want to read them, right? But not everyone

wants to do that. So, as I mentioned before, making the search more accurate, and also digesting the information in the source and putting it in the main window, is something I'm excited about. And the deep search in particular, that agentic search that goes and, as I mentioned, screens a lot of papers, then reads them, and then says, okay, this is relevant to your very detailed statement, like, you know, 10% of something increases this thing by 5%. That type of thing is only possible through agents, through being able to screen and read the papers and then say whether a statement is true or not. And I think that, overall, as these models get more accurate

at comprehending information, there's, on a high level, an interesting experience there, as I said, in how you present that information to people. And maybe not just being passive, where you come to the product and it shows you the sources, but being proactive as well. Maybe you can have alerts set up: for example, you're very interested in, let's say, the GLP-1 drug, or a certain protein, or a certain molecule, or a certain technique, like long-context models; you go subscribe to it, and every week or every month you get an alert, written however you like it, maybe just a couple of bullet points, maybe a full

academically written piece, an email with those insights about how the thing is evolving. So yeah, there's a lot of interesting room to improve and make it better. I'm excited about that. >> Very cool. And I guess my last question, which I didn't even think to ask, and I apologize: how many people are you working with at your company? Is it just you, or do you have a team? >> Yeah. So it's actually one of those products that we're working on part-time. It's me as the AI person, thinking about how to implement the algorithms, plus a person doing the front end and driving that, and then there's

a designer who comes in and out depending on the requests. And I think it's one of those cases that wouldn't have been possible a few years ago. Thanks to AI, we're a lot more efficient, in the sense that, of course, there's always more to do, you always think you could do more in the time you have, but at the same time you're hyperefficient with a much smaller team: pretty much two people part-time, with help coming in and out. And >> I don't think >> that we are behind others because of that. In some ways we're actually coming in more focused on what we want to do, more opinionated, and in some ways

we're winning this way, because if you search on Google for something like "find sources," Sourcely is one of the top two or three results that come up in the space. So people trust it, people click it, and people use it. >> Very cool. Well, I appreciate that context. Something we try to mention on this podcast a lot is that AI is actually really helpful for reducing the overhead of the number of employees you need, and some people are interested in where that can lead them as a small business owner. So I just wanted to ask, because the more people I ask, the more the answers seem to consistently

be exactly that, and I think it makes this a much more viable opportunity for somebody who doesn't have the capital to go out and hire a bunch of people, to start something themselves and do something like this themselves. So, with that being said, plug what you want to plug, and we'll close things out. >> No, thanks a lot. It was a great conversation, I feel, about AI overall and about the product itself. I enjoyed it. >> Awesome. Well, I appreciate it. Make sure to go check out sourcely.net. It is the best place to find sources in seconds. If you liked this episode, please hit that like button on YouTube, hit the subscribe button, and leave some reviews on the Apple and Spotify podcast platforms. Thanks

again to Elman for being on this episode, and we'll see you in the next one. Peace. >> See you.