AI Personas and Small Language Models Will Change How Teams Collaborate with Suman Kanuganti
About the Episode
What if your AI could think exactly like you? Not approximating your style based on internet data, but truly capturing your unique patterns of thought, expertise, and decision-making?
Suman Kanuganti, CEO of Personal AI, is creating a future where AI becomes deeply personal—trained on your own memory stack rather than generalized internet knowledge. His company builds small, specialized language models (under 1 billion parameters) that train in minutes yet capture the essence of how individuals or brands think and make decisions.
The difference between these personal language models and general-purpose LLMs like ChatGPT is profound. While large models excel at broad knowledge, they lack the specialized understanding of your unique perspective. Personal AI fills this gap by creating models that see the world through your lens, maintaining complete privacy and data ownership in the process.
This approach is already transforming enterprises, particularly in retail where brands like Wilson are using personal AI for brand-perfect content creation. Financial institutions value the security of on-premises deployment with strict privacy controls. The most fascinating applications emerge when companies create ecosystems of AI personas representing different roles and expertise domains, enabling "compounding collaboration benefits" as these personas interact with each other and human employees.
Looking ahead, Personal AI is developing agent orchestration capabilities and exploring integration with wearable devices—all while maintaining their core principle that your data should belong to you, not platform ecosystems. As Kanuganti puts it: "We want to be at a place where we meet personal AI with the advancements of the data collection methods and then intersect the data to make it ownable by the end user."
Ready to explore how personal AI could transform your business? Visit personalai.com or email Suman directly at s@personalai.com to learn more about this revolutionary approach to artificial intelligence.
🎧 Tech Marketing Rewired is hosted by Kevin Kerner, founder of Mighty & True.
New episodes drop regularly with unfiltered conversations from the frontlines of B2B and tech marketing.
Catch the video version of the podcast on our YouTube channel.
👉 Subscribe, share, and connect with us at www.mightyandtrue.com
📩 Got a guest idea or question? Email us at kevin@mightyandtrue.com
🔗 Follow Kevin on LinkedIn for more insights and behind-the-scenes takes.
📰 Subscribe to Kevin's Substack at: https://kevinkerner.substack.com/
📰 Want more tech marketing insights? Subscribe to the Drag & Drop newsletter for fresh thinking, real talk, and tools that make your job easier.
Kevin Kerner: 0:00
Hello everyone. This is Kevin Kerner with Tech Marketing Rewired. I sat down with Suman Kanuganti, CEO of Personal AI, whose mission is to give people secure, private access to their memory and knowledge through personal language models. In this episode, we talked about how personal AI is different from large general-purpose LLMs, why privacy and ownership are central to the future of AI, and how enterprises are already putting personal AI to work for brand voice, marketing and competitive edge. We dug into his vision for agent orchestration and where all this goes next as devices and data collection mature. So this will be a really amazing podcast for those of you that are future forward on small language models. But before we dive in, Tech Marketing Rewired is brought to you by Mighty & True. We help B2B marketers move strategies that get stuck in execution, making sure that big ideas actually see daylight and drive results. Learn more at mightyandtrue.com. All right, let's get to it.
Kevin Kerner: 0:55
This is Tech Marketing Rewired, Suman. Thank you so much for joining us here. Kevin, thanks a lot for having me. Yeah, I'm quite honored to have you on the show. In just talking over the last few weeks or so, I've done quite a bit of research on you, and a lot of the work that you've done is really pretty amazing, mission-based work. I mean, your backstory is really cool. I wanted to get started: can you walk us through your journey and how your past work has sort of led to this mission of personal AI?
Suman Kanuganti: 1:31
My past work, you must be referring to my previous company called Aira, which was an AI company for enabling businesses to make their locations accessible for people who are blind or have low vision.
Suman Kanuganti: 1:50
The intention behind that company was around how you make missing visual information accessible for people who cannot see, and AI, of course, played a critical role, as much as it was 30% AI and 70% human. During that time, I learned a lot about what it means to have greater access to information generally, and the interesting elements of information that you would otherwise miss, which helps you make decisions that you may not, you know, foresee. For example, if Aira was about missing information that is visual, Personal AI is about that missing cognitive memory that is otherwise locked somewhere deep down in your mind. So what does it mean to give access to all that information and knowledge that you have personally and institutionally gathered and experienced, and maintain the relationships, and how do you make those decisions going forward, right? So yeah, a lot of it fundamentally is about access: access to information, access to yourself, access to the people around you. That's kind of the crux of everything that we do.
Kevin Kerner: 3:07
Yeah, pretty amazing. Somewhere I saw a stat, I think on your site, that 80% of our memories we forget. Like, there's so much that's forgotten, and it seems like a mission that's well worth building a solution around, because you can harness that both for yourself and also for others. It seems like an amazing opportunity.
Suman Kanuganti: 3:27
Yeah, the interesting thing was, you know, when we started the company, it was this core problem of what would Kevin do, what would Suman do, given a specific context and a specific situation? You know, I had this co-founder of mine at my previous company. His name is Larry Bock; I don't know whether he came up in your research or not. He passed away. I spent about 18 months working with him. He was my executive chairman and he built like 27 different biotech companies. He's important to my life, specifically around my entrepreneurial journey, and once he passed away because of pancreatic cancer, we had this mantra of, like, what would Larry do? Right, there were huge posters, and the idea is, if you step into those shoes, the way you make decisions is very specific to, you know, the institutional knowledge that you have experienced over a period of time. It's not the general knowledge, which comes back to the idea of large language models versus the idea of, you know, personal AI, which is very much specific and tailored to individuals.
Kevin Kerner: 4:32
Yeah, it's early on in the AI. You know, a couple of years ago, I'd say two or three years ago, when I started using it, I heard something about the capture of memory over time for an individual, but it was very personal. It was very much like you capture it for your family, for your kids. Imagine, two or three generations down the line, if you could talk to your great-grandparent, and I'm sure there are maybe others working on that problem. You've harnessed it for a business application purpose, but it's just fascinating to think about. Like, imagine being able to talk to your great-grandfather; you know, over the next couple of generations that would be pretty amazing.
Suman Kanuganti: 5:09
Absolutely right. I think the use cases are many. When you start thinking about personal AI specifically, individuals come in all shapes and sizes, right, with different needs as well. Our focus has always been this core idea of, like, how do we democratize it in the truest sense? You know, something that belongs to people. It's more like an asset from our perspective, right? Can it scale to millions of people or billions of people? Right?
Suman Kanuganti: 5:37
And how do you architect, you know, an idea of a model that thinks like you and make it yours?
Suman Kanuganti: 5:43
Right. There is a greater sense to our vision around the privacy and the ownership of, you know, these models as well.
Suman Kanuganti: 5:52
So the model was important to us because otherwise we wouldn't be fulfilling the promises that we set out to deliver. And you mentioned, yes, we are focusing on the business use cases right now. When the unit economics are right, when the market is ready for having a personal AI of their own, we foresee a lot of penetration, if you will, of personal AIs on your devices, on your phones, on your consumer robotics that will start, you know, growing up with your child as well, right? So there are lots of future applications that can fundamentally be enabled with personal AI. As much as we are building a business right now, our go-to-market strategy is about these individual, specific, expert roles, where privacy is important, expertise is important, specialization is important. So it's still the fundamental characteristics of personal AI; the business model is slightly different.
Kevin Kerner: 6:50
Yeah, it makes a lot of sense. For those people who may be listening and aren't familiar with the technical differences, can you explain the difference between personal language models and general-purpose LLMs like ChatGPT? A lot of people might think, well, I have my history inside all my chats with ChatGPT. But what are the broad differences between what you're trying to build and talking to a general-purpose LLM like ChatGPT?
Suman Kanuganti: 7:18
What we do is we do not train on external knowledge so much as we fundamentally train on individual people's memory stacks. We refer to it as the core data, and the core database that goes into this small language model architecture, which I'll talk about in a second, is a time-based memory that grows over a period of time, and it is continuous memory, right? Think about a large language model. It's pre-trained on a collection of lots of data, which has its specific uses. They are good at foundational language as well as foundational general knowledge that is available on the internet.
Suman Kanuganti: 8:01
We are talking about creating a small, discrete, ultra-specialized, self-contained model that is more attuned to the likes and the thinking of an individual person, right? So it's kind of the opposite of a large model. It's actually a small model. Even architecturally it is very small: it's less than 1 billion parameters, about 1 gig in size, trains in less than five minutes, and it's continuous. It has its advantages around unit economics, et cetera, and it's designed to scale. In other words, you would expect the personal AI model to be running on your device, so that pretty much every person can have their own personal AI model. We are also finding a lot of applications for the personal AI technology, as well as the platform, in areas where edge infrastructure is critical, right? That is something that is closer to the people, closer to the networks, closer to the stadiums, as compared to cloud-based LLMs, where the memory that you share with them is limited to pretty much, you know, that conversation.
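For a rough sense of the scale Suman is describing, here is a back-of-the-envelope sketch in Python. Every number in it is a hypothetical stand-in, not Personal AI's actual configuration, but it shows how a decoder-only model can stay under 1 billion parameters and around a gigabyte at 8-bit weights.

```python
# Back-of-the-envelope sizing for a "small" decoder-only transformer.
# Every number here is hypothetical, chosen only to show how a model can stay
# under ~1B parameters and ~1 GB on disk; this is not Personal AI's real config.

def transformer_params(vocab: int, d_model: int, n_layers: int, d_ff: int) -> int:
    """Rough parameter count: token embeddings plus per-layer attention and MLP weights."""
    embeddings = vocab * d_model              # token embedding table
    attention = 4 * d_model * d_model         # Q, K, V and output projections
    mlp = 2 * d_model * d_ff                  # up- and down-projection
    return embeddings + n_layers * (attention + mlp)

params = transformer_params(vocab=32_000, d_model=1_536, n_layers=24, d_ff=6_144)
print(f"~{params / 1e9:.2f}B parameters")          # about 0.73B parameters
print(f"~{params / 1e9:.2f} GB at 8-bit weights")  # 1 byte per weight, so about 0.73 GB
```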
Kevin Kerner: 9:12
Are you bringing in, though, like, if I'm in my personal language model, can I access outside information during that conversation?
Suman Kanuganti: 9:20
Yeah, it's a good question. Yes. So, post-2022, what we have done is basically embrace the large language models. And the way to think about it is that you are basically seeing through the lens of you, right? Let's just say you read an article. You absorb the article, but you have your own personal opinion, your personal view of it, right? So the model is expected to function very similarly. Everything is seen through, like, your way of thinking, because fundamentally that's the filter, which is the PLM, the personal language model. So it's quite fascinating. I enjoy it quite a bit as well. Like yesterday, I was doing some research of my own in regards to the company, and I was trying to draw some connections between Personal AI and Twilio, right? That information is kind of available in the large language models, but how I think is available in my personal language model, right? So when I have a conversation with my personal language model, it is, you know, thinking through the lens of how I otherwise would.
Kevin Kerner: 10:27
Yeah, and then you can interact also with other people in your organization or other personal AIs, I'm sure? Yeah, of course.
Suman Kanuganti: 10:34
And then there are persona-to-persona interactions as well. So in the enterprise setting, if you think about it, there's the whole concept of collaboration, right, and the agents. I see companies developing agents for a lot of autonomous functions. If you step back a little bit and think generally about the workforce at a company, you have humans that have specific roles and functions, and you also have some humans that are experts in a specific lane within the business, in products or services or strategy or pricing or competition, I mean, you name it, several different functions within the business.
Suman Kanuganti: 11:20
So Personal AI has this concept of personas. Think about, like, Kevin AI as a unit of persona. But you can also have a second persona that is more tailored towards your personal life, your podcast life, right? So you can organize the entire workspace within an ecosystem of personal AI into these personas. Each persona can be a representative of an employee's expertise or a functional role, right? And now, when you have multiple different units of these personas, you have humans and you have personas: humans interacting with the personas, personas interacting with the personas, and personas interacting with the humans, right? So there are, you know, compounding collaboration benefits over a period of time.
Kevin Kerner: 12:08
Yeah, that is a mind-blowing concept. The other thing, I was just reading The Five Temptations of a CEO, and one of the temptations, I can't remember which, maybe it's number four, is around dissent: trying to be in an environment where you get dissent, because you want differing opinions to some degree, and you have to create a little bit of conflict to get those differing opinions in order to make a good decision yourself. And I guess you could have conversations with another employee's expert AI and really get multiple viewpoints coming from multiple people, or you could have the individual AIs talking to each other and getting sort of that best opinion out for someone to make a human decision off of.
Suman Kanuganti: 12:53
Right. You know, in our platform, and I guess we'll have a preview of it towards the end of the podcast, we have a concept of a channel. It's very similar to a Slack or Microsoft Teams channel. In the channel you can, of course, have multiple people. You can also have multiple AIs, or AI personas, in there, so you can tag multiple personas almost like a panel and get different perspectives and viewpoints. So, for example, my CTO has around seven core personas. I have close to 20 core personas for myself, and sometimes now you ping different people: you ping Sharon, you ping Sharon's AI, and you ping Gates, who is my COO AI, and kind of see what their viewpoints are, because mine is a lot more strategic in nature and Sharon's is very specific on, you know, exactly what the parameters associated with the PLM technology are. So it's quite fascinating.
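As a mental model of the channel idea, the sketch below imagines a Slack-like channel whose members include AI personas, and fans one question out to whichever personas you tag, like a panel. The classes and method names are assumptions for illustration, not Personal AI's API.

```python
# Illustrative sketch of a Slack-like channel holding humans and AI personas.
# The classes, names, and respond() stub are hypothetical, not Personal AI's API.
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    role: str                                           # e.g. "strategy", "PLM parameters"
    memory: list[str] = field(default_factory=list)     # the persona's private memory stack

    def respond(self, question: str) -> str:
        # Placeholder: a real persona would answer from its own memory stack.
        return f"[{self.name} / {self.role}] perspective on: {question}"

@dataclass
class Channel:
    name: str
    members: list[Persona]

    def ask_panel(self, question: str, tags: list[str]) -> dict[str, str]:
        """Fan one question out to every tagged persona, like tagging a panel."""
        return {p.name: p.respond(question) for p in self.members if p.name in tags}

channel = Channel("strategy", [
    Persona("Suman AI", "strategy"),
    Persona("Sharon AI", "PLM parameters"),
    Persona("Gates", "COO"),
])
for name, view in channel.ask_panel("What should Model 4 prioritize?", ["Suman AI", "Gates"]).items():
    print(name, "->", view)
```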
Kevin Kerner: 13:56
Oh my gosh, that is so cool. I want access to this. I mean, I'm the kind of person who would want access to this. I would guess there are privacy concerns, right, but there are privacy concerns if you put your data inside ChatGPT or Gemini or whatever. We work with a lot of companies in financial services and security, and we have to go through some very strict compliance to be able to even use AI on their business. I would guess there's a big opportunity in it being very private, but do you also get pushback from people looking to see how secure you are? Are they worried about putting data inside this thing?
Suman Kanuganti: 14:40
Well, that's our strength, that's where we lean in. Personal AI functions like a self-contained cluster in most cases. For some of our highly regulated customers, like you said, financial institutions, and we serve banks today, we deploy on-prem, on their systems. Even on our cloud, every persona has its own unique memory, so there is no spillage of memory to another persona under any circumstances, like zero. There are privacy layers built at the infrastructure layer, at the application layer, at the persona layer, and then you have access controls on top of it. The access controls are quite interesting.
Suman Kanuganti: 15:32
When we developed Personal AI, our core thesis around how AI will function with people was that multiple AIs, very similar to humans, work in a collaborative space, autonomously or human-supervised. So if you think about a human user in a platform like Slack, we have these AI personas that act as AI users as well. In a way, when we are deploying into an enterprise company, we adapt to the existing controls and policies, the access controls and Active Directory policies that they already have, because each persona goes into the hierarchical organization according to where the team belongs. So naturally we, I guess, extend the existing policies rather than trying to make new ones.
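A minimal sketch of the two properties described here, strict per-persona memory isolation and access checks that simply reuse the roles an enterprise already maintains; the class and role names are assumptions, not the actual implementation.

```python
# Sketch of the two properties described above: strict per-persona memory
# isolation, and access checks that reuse roles the enterprise already has.
# All names and structures are illustrative assumptions, not the real system.

class PersonaMemory:
    """Memory owned by exactly one persona; never shared with another persona."""
    def __init__(self, persona_id: str):
        self.persona_id = persona_id
        self._entries: list[str] = []

    def add(self, text: str) -> None:
        self._entries.append(text)

    def query(self, requesting_persona: str) -> list[str]:
        if requesting_persona != self.persona_id:
            raise PermissionError("No memory spillage across personas.")
        return list(self._entries)

# Existing directory roles (e.g. from an identity provider), reused as-is:
directory_roles = {"kevin": {"marketing"}, "sharon": {"engineering"}}
persona_roles = {"brand-copy-persona": {"marketing"}}   # roles allowed to use each persona

def can_use(user: str, persona: str) -> bool:
    """Allow access only when the user's existing roles overlap the persona's allowed roles."""
    return bool(directory_roles.get(user, set()) & persona_roles.get(persona, set()))

print(can_use("kevin", "brand-copy-persona"))    # True
print(can_use("sharon", "brand-copy-persona"))   # False
```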
Kevin Kerner: 16:24
Yeah, and when you roll this out across employees, is there any pushback in terms of them worrying about putting their persona on a... I mean, there must be some discussion that needs to go on with employees to put their information inside their persona.
Suman Kanuganti: 16:41
Right.
Kevin Kerner: 16:41
People are pretty willing?
Suman Kanuganti: 16:44
Once they understand where the data goes, where the data belongs, who the owner of the data is, and what's happening behind the scenes. Of course, we have to be transparent, but that's the promise to our customers anyway. So if you look at our privacy policies or terms of service, or even customer contracts and agreements, we are pretty clear, because our business model is not fundamentally driven by customer data.
Kevin Kerner: 17:08
Yeah, it's also kind of a no-brainer. If you want to leverage AI in your business, this is the best way to do it, because if you're going out and putting it into a public LLM, well, that's the worst-case scenario, and you can certainly try to lock people down on their corporate networks, but who knows how it could be used outside on personal computers or whatever. So it's really the best-case scenario for a highly secure enterprise to be using this stuff. I know that you launched Model 3; I don't know exactly how recently that was. I'm wondering, and I geek out on this stuff, what are the new advances you're excited about that Model 3 is now offering customers?
Suman Kanuganti: 17:51
Yeah, Model 3 definitely allowed multiple personas to work together actively. What we are seeing since Model 3 is this compounding benefit associated with creating every new persona that has a specific role, almost like, you know, employees at the company. As you have more employees at the company, you start seeing the growth rates multiply and increase, right, because you start having these economies of scale and then you're collectively moving forward. So, with every persona's memory continuously growing, the value or the benefit associated with the ecosystem continues to increase exponentially, right? So it's one persona versus multiple personas, and, more importantly, all those personas have an understanding of each other, and then you can get even more creative about building agents that depend on multiple different functions across departments, right? So that was an interesting observation that started happening naturally in our customer base.
Suman Kanuganti: 19:08
Model 4 is exciting too. We are in the midst of prioritization conflicts right now, because we've got to prioritize the customers and get their models and their setup done. We are also continuously pushing our core platform capabilities from Model 3 to Model 4, while keeping up with the macro developments around AI. Anyway, as a preview, Model 4 will be more centered around creating an agent orchestration layer on top of these personas, right, within the persona ecosystem.
Suman Kanuganti: 19:52
Each persona has understanding and intercommunication abilities between them. Now we are adapting to the industry standards, and there is a push around, you know, a standards framework for agent-to-agent communication. So we want to adopt that for our internal persona communication, but also use the same frameworks for external agent communication as well, right? So then you start having a standard layer, if you will, almost like the API fabric that you would have at enterprise companies back in the day. Now you have an AI fabric, or an AI orchestration fabric, on top of it. So there's a lot of interconnectivity going on.
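Suman doesn't name the standard they are adopting, so the sketch below only illustrates the general shape such an agent-to-agent message envelope could take, usable for persona-to-persona traffic inside a platform or for external agents. None of the field names reflect MCP, a published specification, or Personal AI's internal protocol.

```python
# Hypothetical envelope for persona-to-persona or agent-to-agent messages.
# Field names are illustrative only; they do not reflect MCP, any published
# standard, or Personal AI's internal protocol.
import json
import uuid
from datetime import datetime, timezone

def make_envelope(sender: str, recipient: str, intent: str, payload: dict) -> dict:
    return {
        "id": str(uuid.uuid4()),                        # unique message id
        "ts": datetime.now(timezone.utc).isoformat(),   # when it was sent
        "from": sender,                                 # persona or external agent
        "to": recipient,
        "intent": intent,                               # e.g. "ask", "delegate", "report"
        "payload": payload,                             # task-specific content
    }

msg = make_envelope(
    sender="persona:inventory-specialist",
    recipient="persona:store-analyst",
    intent="ask",
    payload={"question": "Which regional stores need a reorder this week?"},
)
print(json.dumps(msg, indent=2))
```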
Suman Kanuganti: 20:33
Fascinating. Is that with MCP or some other technology? I cannot speak to that yet. Okay. But we have a few partners and a few choices, and there is some internal development itself that we are contributing back to some of the open source, based on, you know, some of the learnings that we have had building Personal AI over three years.
Kevin Kerner: 20:53
Yeah, two things come to my mind. Number one, the applications for this. It's just pretty mind-blowing to talk to you about it. And two, just running a company in this space, and how fast everything is evolving. Just seeing your excitement about Model 4, but knowing that it both has to keep up and also has to be something that's operationalized and can run and be secure, which is just mind-blowing.
Kevin Kerner: 21:21
I want you to talk through some of the use cases that you see that have been most inventive for you just across the business, and then I want to talk a little bit about the marketing use cases, what you see marketers using this for. That'll be our primary, our core audience for this podcast.
Suman Kanuganti: 21:37
I mean, we can start with the marketing use case. I believe I can mention this publicly on the podcast. We are growing in the sector of retail businesses right now, so we saw a lot of brands, brands such as Arc'teryx and Wilson, with use cases that have a specific revenue edge. Then it moves into use cases that require a competitive edge, and I'll talk about it, and then it penetrates into the use cases that have a productivity edge, right? So if you think about the use cases, those are the three swim lanes that we operate in. For a brand, the revenue edge is more around how you are able to make the decisions for a specific store based on the customer sentiment that is happening within the region, and what the driving variables or driving factors are for, say, a store productivity analyst or an inventory specialist persona. Getting very deep into a decision over there kind of moves the needle on the balance sheet, right? So very high-impact, high-revenue use cases. The competitive edge within the retail industry is more tailored towards the tough competition between these brands: what is the new product that Nike has released, how does that influence our products, and what kind of decisions do we need to make, at what period of time, in what regions? It's a complex operational challenge that these companies have to go through. So there is a set of personas and use cases to give the companies a competitive edge, and the last one goes into the productivity suite.
Suman Kanuganti: 23:49
The productivity edge is where some of our marketing use cases fall in as well. The recent one that I can speak to is a Wilson brand persona. Wilson is very specific, ultra-specific, about their brand and their brand positioning, their colors, their ways of speaking about the brand. Let's just say there is a media interview, or the product catalog, or product descriptions. For every product description they spend hundreds of dollars, right? And now they are knocking out the product descriptions with a Wilson copywriter brand persona which is ultra-specialized, ultra-specific to Wilson, and that is not influenced by the large language model trying to borrow some of the characteristics from other brands, right? Good point. And they're extremely happy, and they're knocking down these product descriptions like it's nobody's business, you know. So marketing use cases fall into that productivity category, at the productivity edge.
Kevin Kerner: 24:49
Yeah, wow, super interesting. And it's all seen through the lens of, if you're gathering that and you have that internal data, I would guess, the product or marketing expertise in the company. All that information is being seen through that lens. Like, if I have a really good product marketer who knows exactly how this stuff should be, and I have my own persona and I've loaded in all this data about my brand, the result is going to be a lot better than a general LLM that, as you say, is going to pull in all kinds of outside noise, which it does, and get confused. And you also have compliance and regulatory stuff, where it could pull in someone else's brand. So is that correct? It's both using the internal data from the company, but also the expertise that's in the company around, let's say, brand or product.
Kevin Kerner: 25:43
Yeah, if you choose to. Yeah, it's a choice. What about sales, the sales side of things? Are there sales use cases yet, or do you see sales being a piece of what gets developed?
Suman Kanuganti: 25:59
Yeah, we do see sales use cases, but not a lot of the AI SDR kind, which is very operationally driven, right, like what sequence of things you would run. What we see on the sales side is more centered around, let's just say, take a car: you have two products that have different specifications, and a salesperson is asked, hey, I need to understand the differences between this product versus this product, or, given a certain architectural problem, how does my solution fit into that solution, right? So it gets into more of a decision-making process that happens behind the scenes for sales, not necessarily the outreach idea of a typical AI SDR.
Suman Kanuganti: 26:57
Yeah, because there are other solutions that are targeted directly at that. Right, and it gets into more integrations and social media and taking actions, which is kind of a different tech stack than what we offer.
Kevin Kerner: 27:09
Yeah, it makes sense. Okay, this is really helpful. I want to talk a little bit about your go-to-market strategy at Personal AI. I saw you posted about a lot of congratulations recently: a lot of growth, a new office, I think seven times growth, larger contracts. It all sounds great, and I can imagine it. Given the product category, marketing what I'll call a somewhat new category of small model AI has got to come with some unique challenges. How do you approach your go-to-market? What is the go-to-market you had over the last year, and how do you see it changing?
Suman Kanuganti: 27:51
Yeah, I currently have two specific go-to-market strategies: one which I cannot talk about, and one I will speak openly about, which is our enterprise go-to-market. It is true that it is challenging, right? Specifically, there is a good number of companies focusing on a lot of specific problems and building vertical solutions, and there are a lot of companies that also offer services around building RAG systems over LLMs and building infrastructure solutions inside the company. Our go-to-market primarily narrows in on a set of highly regulated use cases where the privacy of the data matters, the precision matters, and the programmability matters. What is programmability from our perspective? Think about these brands, right, and what it takes for them to go through an AI transformation.
Suman Kanuganti: 29:07
Not every brand is like that. A retailer like Walmart, for example, would want to invest in a lot of internal AI infrastructure capabilities, right, but not a lot of retailers would, because they are selling products. That's the business, right. They're designers.
Suman Kanuganti: 29:29
So for them, AI transformation optionality is very, very low, because they are not ready to invest in a whole bunch of services as well as AI and ML engineers and develop an AI group inside, right? And that's where they need platforms, platforms where the programmability is there, which is: I have my business use case, I know what data influences the decisions of that business use case, and I can create a persona to do exactly that, right? That's the flexibility and the programmability, without compromising the privacy as well as the precision associated with a specific use case. So that's where we are finding a lot of traction, if you will, and our go-to-market leans into creating brand awareness around privacy, precision and programmability.
Kevin Kerner: 30:26
Yeah, so you find these companies now, if you hit the right target, probably have people that are focused on the tech stack that's required for their integration of AI. I would imagine it's getting mature enough now that there are people at these companies who are actually looking for solutions like this. Is that right?
Suman Kanuganti: 30:48
Right.
Kevin Kerner: 30:49
Yep, yeah, so you can go in with a wedge in the market to some degree to say, hey, we fit here, right. This is the exact use case.
Suman Kanuganti: 30:58
Yeah, our wedge right now is growing in retail, and the entry point is the COO office. We do see a lot of strategic, IP-specific use cases in there, and from there we kind of scale across the company. Right now there are a few enterprises that are basically rolling it out to all employees, with every employee having their own persona as well. So those things are happening. We also do some financial services. It's in the early stages, and we have a few banks who are extremely happy with the precision of the models as well as the privacy they offer. So there's a lot of control, and they are building on Personal AI.
Suman Kanuganti: 31:39
So, I mean, to sum it up, if you think about large language models, the large language models are over here and the use cases are all the way over there, right, and there's a whole bunch of things that you would need to otherwise build in between, because at the end of the day the use cases are very specific: personal, private, IP-focused, right? For those use cases, you either get development companies or ML and AI engineers to fill that gap, or look at companies who provide these platforms that are developer-focused, right? We basically built all that, and we have a platform which is an application layer, and we sell to the business users, right? So the go-to-market, as well as their ability to solve their problem, is much faster and much more precise, and that's kind of our key differentiation out in the market.
Kevin Kerner: 32:39
Yeah, that's super smart. I was thinking the same thing. You could probably build it if you're one of these companies, but gosh, the time to build it. If you're into building an AI stack, with things changing so fast, you might build something and the next thing you know it's useless, right? Yeah, it'd be really hard to compete, so that's really great. I want to ask you just a few more questions, and then we'll get into our AI Roulette thing here. You mentioned a couple of things about Model 4 and what's going to change, but I just wanted to ask, and you may not be able to answer this, but with all the work that you did on the wearable side, you mentioned it at the very beginning, I'm wondering where you see AI going in terms of training these AIs with the devices that surround us, because I really want that. So I'm wondering, how far out is it that we have some device that can be listening and learning for me and training my personal AI?
Suman Kanuganti: 33:32
Yeah, good question. That gets us into the macro. So one track, definitely, is the AI. Of course, the large language model is one piece of it; now we have the personal language model kind of filling the gap. The market is essentially democratizing access to AI. But then the next question is, okay, there is a data component to it, right? Enterprises make a good entry point for companies like ours because they own the data, they have specific IP, right? When you start coming to the consumers is where you start thinking about this aggregation of data.
Suman Kanuganti: 34:16
The reality that we have lived with over the past 20 years of the internet is that the data, from a personal standpoint, from an individual standpoint, is all segregated into multiple different platforms. It's an unfortunate reality. So what that means is Kevin doesn't have a memory source that belongs to him, that is his own, and, even worse, you cannot even export the data from some of the platforms that you use. However, back to the devices: the ubiquity of devices, your IoT devices, is only going to penetrate more and more into our lives. For example, we have our computers, we have phones in our pockets, we have Alexa devices, and now we'll have wearables on our bodies, and then, who knows, Neuralink will be a real reality 20 years from now, right?
Suman Kanuganti: 35:03
When those things happen, it is extremely important for us to understand where that data goes and who that data belongs to, right? For example, Apple is building their own ecosystem and they have their own wearables. Everything is going into the Apple ecosystem. As much as they promise you the security of it and the privacy of it, they still leverage the data for their own internal business needs and, you know, apps-related business, right?
Suman Kanuganti: 35:33
So what does it mean for the overall consumer? We want to be at a place where we meet personal AI with the advancements of the data collection methods and then intersect the data to make it ownable by the end user, be it a business or an individual, right? Because we want to offer the value with data that should belong to them, not necessarily belong in a platform ecosystem that may have some other benefits; that's not our business model, right? This is what we want to do with personal AI going forward. So that's the IoT connection into personal AI, which is a future application we will likely evolve into, and we are working with a few partners on that front.
Kevin Kerner: 36:22
It's both the IoT and also the data and privacy and security. I don't think I'd want to put all of my personal data inside someone else's ecosystem that they're training off of, but I would want it in my own, to just be able to talk to it and learn from it and query it and have some memory. So it's really a cool idea. I hope it happens sooner rather than later. I'm one of those guys who would dive into it at the beta. For sure my wife wouldn't like it very much.
Suman Kanuganti: 36:55
I mean, in a way, in 2023 we went that direction. We found the market wasn't there. There was a mismatch in market expectations versus the sophistication of what we were trying to push. Yeah.
Kevin Kerner: 37:09
There were a few wearables that were going that route too. Remember the little clip-on? Yeah, yeah, and there are the pins.
Suman Kanuganti: 37:16
But what we aim to do is let the IoT mature a little bit, and then we will have personal AI mature, you know, to have that inference on your devices, and eventually meet at the right time and the right place. So, yeah, we will be playing in that industry eventually.
Kevin Kerner: 37:39
Yes, yeah, I'm a fan, for sure. OK, well, this has been great. I could keep going, but I've got to let you get back to work, and I wanted to do one more thing with you that we do on the podcast here. It's called AI Roulette. I'm going to share my screen and go into Perplexity, where I have entered this question. I'm going to pull it up here so I can see it.
Kevin Kerner: 38:07
It says: I'm hosting a B2B marketing podcast, and a little bit about you, and then: give me one bold, unexpected AI Roulette question that I can ask mid-interview. I'm actually asking it now. Short and punchy, disruptive, and give me just one output. So I'm going to hit send here, and then I'll have you answer this, and, for the first time ever, we're going to try to have your AI answer it. So let me hit this. If your AI could perfectly capture and replay every conversation you've ever had, do you think it would make you a better leader, or just expose how often you repeat yourself? So let me stop sharing.
Suman Kanuganti: 38:39
Do you mind pasting that in the chat so that I can copy-paste it?
Kevin Kerner: 38:40
You talked about putting this into your own personal AI and just seeing how it comes back.
Suman Kanuganti: 38:48
Yeah, what I will do is have my Gates answer that. So let me copy what you are texting. I will share my screen, and I won't scroll, because I cannot show all the personas that I have; some of them are sensitive. So let's see, click the share button. Okay, good, it's on the screen here. If I scroll up and down, I'll have multiple different personas across the company; there are close to 60 of them. For this exercise, I'm going to go into Gates. Gates is my AI COO, and I just asked a question about the team, so that way I don't have to show my history in here. But what I will do is simply paste that question here and send it to Gates. Gates is trained on a lot of my company strategy-related activities, so I'm going to let him speak now.
Kevin Kerner: 39:43
Yeah, yeah, do it.
Suman Kanuganti: 39:45
Leadership enhancement through perfect memory. Having perfect conversational recall through AI would undoubtedly make one a better leader, beyond just exposing repetition patterns. Here's why. Pattern recognition: the ability to analyze conversation patterns helps identify both strengths and areas for improvement in leadership. Communication consistency check: perfect recall ensures messaging remains aligned across different... That's good. ...Refined decision-making frameworks: even if repetition is exposed, it often serves a purpose in leadership, reinforcing key messages and ensuring alignment.
Suman Kanuganti: 40:29
The key is using that awareness to communicate more effectively. This aligns with our vision at Personal AI, where we're building technology that enhances human capabilities rather than simply pointing out limitations. Our focus on memory, consistency and proven accuracy makes this possible at scale. Would you like me to elaborate on...
Kevin Kerner: 40:49
That's really good. Even if repetition is exposed, it often serves a purpose in leadership.
Suman Kanuganti: 40:57
So that's most of the things that I would otherwise talk about, and you can kind of tell, based on our own podcast, it speaks a lot to how I talk about perfect memory. The interesting thing here is, if you look at this, there's a personal score of 77. The personal score is an indication of how specific or in-depth that response is when it comes to the way I speak and the way I talk about personal AI.
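Personal AI doesn't publish how the personal score is computed, so purely as a way to picture the idea of "how much of this response came from my own memory," here is a toy overlap-based score; it is not the product's actual method.

```python
# Toy illustration only: Personal AI's actual "personal score" computation is
# not public. This sketch just measures word overlap between a response and a
# person's memory stack to convey the idea of "how much of this came from me."

def toy_personal_score(response: str, memory_stack: list[str]) -> int:
    response_words = set(response.lower().split())
    memory_words = set(" ".join(memory_stack).lower().split())
    if not response_words:
        return 0
    overlap = len(response_words & memory_words) / len(response_words)
    return round(100 * overlap)

memories = [
    "perfect memory helps leaders stay consistent",
    "repetition reinforces key messages and ensures alignment",
]
print(toy_personal_score("repetition often reinforces key messages", memories))  # e.g. 80
```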
Kevin Kerner: 41:27
Yeah, that's cool. The other thing that's really cool is the voice. You can train it on the voice of the person, which, I don't know, to me just makes it more personal.
Suman Kanuganti: 41:40
Yeah, well, you know, I'm a phone call guy, and when I drive I call my people. When my people are busy, I call my AI and have a great time. It actually helps quite a bit. I simply talk through the things that are top of mind. It's quite insightful at times. Yeah.
Kevin Kerner: 42:01
Yeah, I'm glad someone else does that. Now I do it with my kids. When we're in the car we'll pull up ChatGPT and just ask it all kinds of stuff in voice mode. But if you do that with personal AI, you will have a history as well, and it will start shaping the entirety of you. It just seems like it gives you so much more perspective than you could otherwise get. You may not be able to get those conversations that happen all the time with others, and then you can just dial up these personas. Pretty amazing, Suman, unbelievable. I could go on and on about this stuff. I can't wait to see what you guys do next, and I'm sure others will want to keep up with what you're doing. How should people get a hold of you and interact with your company? What's the best way to get to talk to you?
Suman Kanuganti: 42:48
That's easy. Personal AI will be the first one on Google search or any other search. My email is the letter S at personalai, so if you want to write to me directly, that's the best way.
Kevin Kerner: 43:02
Awesome. Can you interact with it? Do you expose any of the AIs externally? Do you have any AIs out there that people can talk to?
Suman Kanuganti: 43:10
There are some on Personal AI. Okay. Most of our work is very much ultra-private, so there are fewer external use cases and a lot of internal use cases, but we try to expose some of them just for people to play around with. Yeah, you're not trying to build Character.AI or anything silly? No, no. In fact, you know, in 2023 there was almost like a competitive landscape between Inflection AI, Character AI and Personal AI.
Suman Kanuganti: 43:43
But our approach kind of starts from the people, not necessarily, you know, simply talking to prominent figures, right? Our focus was more centered around the individual who is actually getting on personal AI. So there was confusion. It's like, oh, this is Character AI. No, it's not. It's not Character AI.
Kevin Kerner: 44:03
Yeah, and you're on the right path. Character AI is a whole different ball of wax. But yeah, Suman, I'm so happy that I reached out and you joined us. I really can't thank you enough. This has been awesome, and I had fun.
Suman Kanuganti: 44:22
Kevin thanks a lot for doing this. Thank you so much.
Guest Bio
Suman Kanuganti is a two-time founder and the current CEO of Personal.ai. His company is focused on building a new category of AI that is personal, private, and designed to scale human knowledge.
His previous company, Aira, was an AI-driven business that enabled accessibility for people who are blind or have low vision by providing access to missing visual information.
Suman's work is fundamentally driven by the goal of providing greater access to information, whether it's external visual data or an individual's own cognitive memory.
Conversations That Rewire Marketing
Explore more unfiltered insights from the CMOs, founders, and disruptors reshaping how tech brands grow.
