May 2, 2023

The Peter Voss Hypothesis: We Will Soon Need to Embrace AI to Be Effective in the World

In this episode of the Product Science Podcast, we cover career opportunities in AI development, the potential of AI to be personal and an assistant, and how embracing a future with AI means focusing on critical thinking skills.

Written By:
Holly Hester-Reilly

Peter Voss is a pioneer in AI who coined the term 'Artificial General Intelligence' and is the CEO and Chief Scientist at Aigo.ai. For the past 15 years, Voss and his team at Aigo have been perfecting an industry-disruptive, highly intelligent, and hyper-personalized chatbot, with a brain, for large enterprise customers.


Subscribe for the full episode on Apple, Google Play, Spotify, Stitcher, YouTube and more. Love what you hear? Leave us a review, it means a lot.

Resource Links

Follow Aigo.AI on Twitter

Follow Peter on LinkedIn

Follow Holly on Twitter

Follow Holly on LinkedIn

Questions we explore in this episode

How did Peter Voss and his team at Aigo build a hyper-personalized chatbot with a brain for large enterprise customers?

  • Peter Voss spent five years studying intelligence and understanding different aspects of it from how children learn to measuring intelligence.
  • He then started designing an artificial intelligence system with the goal of building one that can think, learn, and reason the way humans do.
  • Peter hired people with cognitive science, linguistics, and cognitive psychology backgrounds to build a curriculum and tests that teach the system what it needs to know.
  • The company's approach is different from most work in AI, as it's based on cognition.

How should we be thinking about the role of AI in our lives in the next 5-10 years?

  • Knowledge workers will have personal assistants with seamless interactions that store their personal knowledge.
  • We will need to learn to embrace AI as a tool in order to be effective in the world.
  • The best educators will embrace AI because they have to prepare their students for using these tools effectively.

What does Voss describe as the stages of acceptance of AI in academia and other fields?

  1. You try to ban it.
  2. You very reluctantly implement it.
  3. It becomes mainstream.
  4. You embrace it.

Quotes from Peter Voss in this episode

Most work in AI is from an engineering perspective and more recently from a big data perspective, which means statistics play a big part in it, mathematics and statistics, and not really cognition at all. The next wave is really to approach artificial intelligence from intelligence, from understanding intelligence and cognition.
I think anybody in business, irrespective of whether it's a startup or a well-established business, really just has to understand this technology, understand its strengths and weaknesses.
Ultimately I think that's what the best educators will do: embrace it, because they have to prepare their students for using these tools effectively. If you want to be effective in the world a few years from now, you have to be totally comfortable with these tools.

The Product Science Podcast Season 5 is brought to you by Productboard

Check out their eBook: 10 Dysfunctions of Product Management

If your product team is struggling, it could be one of the 10 dysfunctions of product management. Learn more about each dysfunction and how a product management system can help you address and avoid them.

Get Your Copy


Holly Hester-Reilly: Hi, and welcome to the Product Science Podcast, where we're helping startup founders and product leaders build high-growth products, teams, and companies through real conversations with people who have tried it and aren't afraid to share lessons learned from their failures along the way. I'm your host, Holly Hester-Reilly, founder and CEO of H2R Product Science. This week on the Product Science Podcast, my guest is Peter Voss. Peter is a pioneer in AI who coined the term artificial general intelligence, and he's the CEO and Chief Scientist at Aigo.ai. For the past 15 years, Voss and his team at Aigo have been perfecting an industry-disruptive, highly intelligent, and hyper-personalized chatbot with a brain for large enterprise customers. Welcome, Peter. So I would love to hear more about your journey to building Aigo.ai. Can you tell me a little bit about how that got started?

Peter Voss: Yes, it's a bit of a long story, but I'll try and keep it short. So I started out as an electronics engineer and started my own electronics company, and then I fell in love with software, so my company turned into a software company. I developed a comprehensive ERP package, and we ended up being quite successful in that; the company grew to 400 people and we did an IPO. It's really when I exited the company that I had the time available to think about what I really wanted to do. And the thing that struck me is that software really is quite dumb. If the programmer doesn't think of something, it'll just give an error message or crash. There's no common sense in software, as I think we've all experienced. So basically, how can we solve that problem? That was really what I wanted to work on. So I spent five years just studying intelligence, all the different aspects of intelligence: what is intelligence, how do we measure it, how can we be certain of things, how do children learn, how does our intelligence differ from animal intelligence? So really deeply understanding intelligence. And then I started designing an AI system, an artificial intelligence system that can think and learn and reason the way humans do. And of course that's a really big challenge, so I've been working on that for a long time, alternating between being in R&D mode and commercializing the technology. So that's really how I got to Aigo, ultimately to build the chatbot with a brain.

Holly Hester-Reilly:I'm curious about that period where you spent five years just learning about intelligence. How did you set goals for what you were learning? What did you do then?

Peter Voss: It was really just driven by fascination and interest. One book would lead to another book or to some research papers. I really started off with the foundations of epistemology, the theory of knowledge: How do we know anything? How can we be certain of things? What is reality? And things like that. And then from that, I really got a better understanding of how we conceptualize and how we learn, and what makes human intelligence so unique.

Holly Hester-Reilly: What is it that makes human intelligence so unique?

Peter Voss: It's the ability to basically form abstractions, high-level abstractions. We can learn abstractions on top of abstractions to have very abstract concepts like freedom, liberty, or honor, and that really requires high levels of abstraction. The other thing that I found: I spent the better part of a year working on developing a new type of IQ test that doesn't have the same cultural bias as traditional IQ tests have. I was working with a researcher in South Africa; she developed the theory behind it, and I implemented the computerization of it. And what I found through that is that metacognition, us being able to think about thinking, to reason about reasoning, is the single most important indicator of high-level intelligence. So people who are highly intelligent and highly competent have good metacognition. They automatically tend to approach problems in the right way and don't get stuck in the wrong way of thinking about things. So there were just a lot of different aspects and angles of intelligence. And of course I also spent a lot of time studying what had already been done in the field of artificial intelligence over the many decades before that.

Holly Hester-Reilly: So how are we able to make such a statement? How are we able to say that metacognition is such an indicator of intelligence?

Peter Voss: So it's not really an intelligence test; it's called a cognitive process profile. It measures the different dimensions of cognition in somebody. And the way it works is that the person learns a new language and learns a new game, and we basically measure how the person goes about doing this. There are about a hundred different dimensions covering different aspects: how quickly people respond, how good they are at using memory, whether they generalize easily. And one set of these dimensions is metacognition, basically how they manage the overall reasoning process. So by computerizing this and monitoring how people actually learned this language and played this game, we were able to extract that this one dimension correlated very well with other measures of intelligence and general competence that people have across a wide range of fields.
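The claim that one dimension "correlated very well" with independent competence measures boils down to computing a correlation between per-dimension test scores and an outside competence measure. Here is a minimal sketch in Python; all the scores are made up purely for illustration (the real cognitive process profile's data and dimensions are not public here):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores (0-100) for five test-takers; not real data.
metacognition = [85, 62, 91, 45, 70]
speed         = [60, 88, 55, 72, 64]
competence    = [82, 65, 94, 40, 73]  # independent measure of overall competence

# Which dimension tracks the independent competence measure best?
print(round(pearson(metacognition, competence), 2))
print(round(pearson(speed, competence), 2))
```

With these illustrative numbers, metacognition correlates strongly with competence while response speed does not, which is the shape of the finding Peter describes.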

Holly Hester-Reilly: And how do you account for the cultural bias that you were mentioning when you were working with the researcher in South Africa?

Peter Voss: Yes, so this work was done in South Africa, where traditional IQ tests are a huge problem because the majority of the population doesn't have the Western background, and these IQ tests really are based on assuming that you have that background. And so what made this test so powerful and not have a cultural bias, or minimal cultural bias, I should say, is that it's a completely new game and a completely new language that you learn. So it leveled the playing field. And it was very interesting with some candidates for a very high-level position in South Africa, basically the governor of the Reserve Bank. They were trying to fill the slot, and there were a whole lot of candidates that came in with high qualifications, and they did very poorly on this because they had very rigid thinking. They weren't able to fluidly adjust. And then on the other hand, we could see that somebody with very little education could do very well. We had a lady serving tea in the office, and she took the test, and it was very quickly apparent that she was management material. Just her way of grasping this game and learning this new language very quickly showed that she has a lot of cognitive ability.

Holly Hester-Reilly: I guess I have to ask, because you just said this about management material: how does cognitive ability relate to management?

Peter Voss: A good manager has to be able to see the big picture, and that is an ability to not get stuck in the weeds, basically not to get stuck in solving individual problems, but the ability to assess how good the people are that you're managing, what their strengths and weaknesses are, and to stand above the specific problems. So an important skill in management, apart from people skills, which is obviously a different dimension from cognitive ability, that's more EQ, is that a good manager absolutely has to be able to look at things at a more abstract level and see what the high-level patterns are, where things are working and where they aren't. And also to assess the people that they're managing or coordinating, to really go beyond particular strengths or weaknesses they may have and look overall at how well they fit into whatever the task is.

Holly Hester-Reilly:that makes sense. So I wanna focus. In our conversation a little bit, we've been talking about some general intelligence related things, but I know that a lot of our listeners are people in product and startups who maybe haven't had as much exposure to developing for AI as they would like, and there's a lot of things going on that make it exciting right now.And I'm curious if you can tell us more about. The actual development process, how did you get started with building a team that could build ai? How did you figure out who you even needed on a team and how to begin?

Peter Voss: So the approach that we are taking is very much based on cognition, on understanding intelligence, which is actually very different from most work in AI, especially recently. Most work in AI is from an engineering perspective and more recently from a big data perspective, which means statistics play a big part in it, mathematics and statistics, and really not cognition at all. Look at the different phases AI has gone through. In the earlier days of artificial intelligence, expert systems and things of that nature were very much logic based, so it was logicians and programmers who really built AI. For example, you had Deep Blue, IBM's system that beat the world chess champion in 1997, and that was very much an expert system, with programming and logic involved. In the last 10 years or so, this has shifted to big data statistical systems, so it's statisticians and data scientists that have really been driving it. But again, not cognition, not people who really start from cognitive science or cognitive psychology. So the next wave is really cognitive architectures: to approach artificial intelligence from intelligence, from understanding intelligence and cognition. When I started putting together a team, I did certainly look for some good programmers to actually be able to do the programming, but mainly what I was looking for was people who have cognitive science, linguistics, and cognitive psychology backgrounds. And that's actually the biggest part of our company: not engineers, but what we call AI psychologists, people who understand language and thinking and cognition, basically. So our approach and the kind of people we hire are very different. Quite frankly, we can't find people with experience in AI, because cognitive approaches are just very rare, and people who have worked on other cognitive approaches would probably have to unlearn too much.
So we really just get smart people, usually with just one or two years of work experience, but who have a passion for cognition. On our team, that's the most important part.

Holly Hester-Reilly: What do those people do once they join the team?

Peter Voss: They basically teach the system; they build a curriculum and tests to teach our system what it needs to know. And so it's a matter of teaching the system in the right order. Now, there are really two different directions or paths in our company right now. One is more of an R&D path, to continue developing the IQ of the system to make it smarter and smarter. And then the other part of our company is the commercial aspect. And there's quite a difference, because on the IQ part you want the system to be able to learn a lot of different things, to come up with different ideas and different responses that may not always be correct, because you're still developing the system, whereas on the commercial side, everything needs to be 100% correct. The way the chatbot interacts with people has to be signed off by legal, by marketing, by the customer experience team. So you really need a very predictable system. You don't really want a system that says, I know better how to sell something or how to solve a technical problem. It has to be: okay, you do it by the book, the way you would train a call center agent. So for example, one of our big customers is 1-800-FLOWERS, and Harry and David are part of that group. And they engaged us because they wanted a hyper-personalized concierge assistant that can learn who you buy gifts for, for what occasion, what kind of gifts different people like, and to be able to basically give you this very personalized service. But it also has to meet all of their business rules and, as I say, be signed off by legal and marketing and customer experience teams.

Holly Hester-Reilly: When you get a new client at Aigo and you're building up a personalized chatbot for them, what does that process look like? Do you have product managers involved?

Peter Voss: Yes. Yeah, a very good question. A big part of it is, of course, for us to understand each business, to understand the business rules. So there are a couple of different aspects we need to work through. One of them is building what's called an ontology: that's really the particular products and services that they offer. Each business has their own jargon and their own products, which the system has to be taught. So you have to teach it, for example, what is in the category of birthday gifts as opposed to, say, funerals, and so on. To build those ontologies, we need to understand the business process from the customer, and this is again where our AI psychologists get involved, understanding the business and then helping to craft those ontologies. Often a big part of it we can get from existing databases or websites or whatever, but we have to know what information to get and how to put it into the brain. So that's one aspect. The second aspect that's invariably involved is the business rules that we have to learn. What are the business rules? How do you respond if a product is due in one day? What are the delivery schedules? How do you handle different types of issues? So that's another aspect: we have to understand those business rules and teach the system to handle them. The next aspect is integration. There's invariably integration with existing APIs to find the current status of an order, or product availability, or whatever it might be, or an FAQ set on the system. And so we then have to hook up and match these APIs with the brain. We have to say, if a field is called name_last in the API, it has to map in the brain to a person's last name, and so on. So that integration has to be done as well.
And then we basically iterate and tune the system to the way people actually respond to it, because there's always a gap. Even with the most experienced implementations, you never quite know how people are going to respond, depending on the demographic: how comfortable are they talking to an AI, and what are the things they understand easily and the things they don't? So then we tune the system, and that invariably improves the containment, or the self-service rate.
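The API-to-brain mapping Peter describes (a field called name_last in the API mapping to a person's last name in the brain) can be sketched as a simple translation table. All field and concept names below are hypothetical illustrations, not Aigo's actual schema:

```python
# Map the external API's field names to the concept names the "brain" uses.
# These names are invented for illustration only.
API_TO_BRAIN = {
    "name_last": "person.last_name",
    "name_first": "person.first_name",
    "order_status": "order.current_status",
}

def translate_api_response(api_payload: dict) -> dict:
    """Rename known API fields to the ontology's concept names,
    dropping fields the brain has no mapping for."""
    return {
        API_TO_BRAIN[key]: value
        for key, value in api_payload.items()
        if key in API_TO_BRAIN
    }

payload = {"name_last": "Voss", "order_status": "shipped", "internal_id": 42}
print(translate_api_response(payload))
# {'person.last_name': 'Voss', 'order.current_status': 'shipped'}
```

A real integration would also handle type conversion and missing fields, but the core of the step is exactly this kind of field-by-field mapping.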

Holly Hester-Reilly: Now, is that one of the key things you're measuring, the self-service rate?

Peter Voss: Yes, absolutely. Self-service rate and customer satisfaction, those are really the two things. There are other measures: we also monitor how quickly people respond, how quickly the system responds, whether people have to repeat any prompts, because that would give an idea that maybe they had to think about it or they didn't quite understand it. But bottom line, the self-service rate and customer satisfaction are the two big ones.

Holly Hester-Reilly: And what kind of results do you typically see with that?

Peter Voss: Oh, we get around 90% self-service on things that are in scope. There are certain things that are not in scope. For example, we are not handling financial issues; if there's something wrong with your credit card or whatever, we'll just identify who the customer is and what the general issue is, and then we pass it on to a live agent. But we've already collected a lot of information that we can then transfer, so the interaction with the person also becomes a lot more efficient.

Holly Hester-Reilly: That makes sense. So do you have a sense of how similar or different it is to be working on developing Aigo.ai versus people who are engaged as product managers or founders in AI but aren't working on a cognitive system? Is the development process similar, or are there significant differences because of the approach?

Peter Voss: Yeah, so if you look at the different kinds of AI, there are quite a number of different aspects. As I already mentioned, a lot of the earlier systems used to be expert systems, and we still have those around now, where it's essentially some kind of flow-charting program with a database that goes through a decision tree, and that could be a chatbot as well. Like the things we all hate: when you call a company and you have an automated system that says, press one for this or two for that, or say yes or no. Those are just simple flow charts, and there's no cognition, but there's still a lot of that kind of work being done. In fact, Siri and Alexa, for example, really work on that principle. Once they've identified an intent, you could say, for example, blah, blah, blah, weather, and it'll give you the weather report. But you can also say, I hate Uber, don't ever give me Uber again, and the chances are it'll trigger the Uber app and ask you, where do you want to go, how many people are going, do you want Uber X? So unfortunately a lot of the chatbots and automated systems just work on those principles. And then of course, what everybody's talking about right now are chatbots. Actually, before I go there, I'll just mention the other category, which is categorization, or statistical categorization, like sentiment analysis or targeted ads.
And that was really the big story until we got to ChatGPT late last year. Prior to that it was all about machine learning, deep learning, basically how you can use a lot of data to categorize things, whether it was image recognition for autonomous driving or whatever purpose, or sentiment analysis, or targeted advertising. Those are basically categorization systems where you give the system a lot of training data that is tagged, and then with new input it is able to tell you, oh, this person probably wants this kind of product, or they're unhappy, or whatever. But more recently the focus, of course, is on a new kind of system, GPT, which stands for generative pre-trained transformer, and each of those words is actually quite significant in describing it. Generative means that, based on some input, it can generate a lot of output. It can be a whole story; you can say, write me a book about whatever, and in fact, if I said exactly that, it would probably write a book about "whatever," literally. It can generate text, it can generate lists, it can generate poems and so on, and it's really totally amazing what it can generate. Now, the P is pre-trained, and that's important: it doesn't actually learn during the process of using it. It doesn't learn new information. It takes into account the context of what you said earlier within the session, but it doesn't learn anything; the model isn't permanently updated from your interaction. So it's pre-trained. Somebody asked Sam Altman, did it cost a hundred million dollars to train GPT-4? And he said, no, a lot more. You can just see how massive and expensive that training process is. Hundreds of billions of words have been fed into it, which is just mind-boggling. So it's pre-trained. It doesn't really learn, and for that matter it doesn't really reason. It really is just based on this massive amount of statistical information, which makes it able to do these really amazing things.
And then the T just stands for transformer, a technology that was introduced in 2017. That's turned out to be really powerful. It's a way of finding the most relevant words in a sentence to predict the next word, and it's just a clever way of doing that. And that's GPT. It's quite amazing what it can do, but it's also become very apparent that it's not general intelligence. It doesn't really think about what it's saying. It doesn't have metacognition. It's a complete black box, so if it gives you the wrong result, there's really not a lot you can do other than trying to fiddle with different prompts, and the human always has to be in the loop. So it's really not appropriate for commercial-grade applications where you can't tolerate error. Like I said earlier, if your legal or QA departments need to sign off on it, you really can't use it; it can be very unpredictable in what it says. If it's just a simple FAQ type of thing, sure, you could use it for that; people are used to search giving them wrong results. But anything where you're actually updating APIs, or you want an ongoing conversation that you can rely on, this technology is just not up to it.
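The "predicts one word at a time from statistics" idea can be illustrated with a toy bigram model. This is vastly simpler than a transformer, but it shows the same purely statistical, non-reasoning generation Peter describes: no model of truth, just counts of what word tends to follow what:

```python
from collections import Counter, defaultdict

# Tiny training "corpus" chosen so every prediction is unambiguous.
corpus = "the cat sat on the mat and the cat sat".split()

# Count, for each word, which words follow it and how often.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word."""
    return bigrams[word].most_common(1)[0][0]

# Generate greedily, one word at a time, exactly as described: no
# reasoning, just the statistics of the training text.
word, out = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    out.append(word)
print(" ".join(out))
# the cat sat on the
```

A transformer replaces the bigram counts with learned attention over the whole context, but the generation loop, predict the next token and append it, is the same shape.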

Holly Hester-Reilly: So for someone who's in the early stages of exploring what chatbots mean for them and their career, how should they be thinking about what's happening now?

Peter Voss: When you're talking about a career, there are obviously a ton of opportunities with GPT-type models, or large language models, LLMs. And they are going to evolve in so many different directions. You now have the same core technology also being used to create images and music and videos, and that's not going to go away; it's going to continue. People more interested in the longer term, in solving intelligence, really need to look elsewhere, though. They really need to look more at cognitive architectures, to take the approach that we've taken: to first understand what is really important in intelligence. But there are just so many opportunities in AI right now, and that's just going to grow.

Holly Hester-Reilly: Yeah. One of the areas that I hear about from people is, how do they get their first opportunity in it, or how do they find or make that opportunity? I'm a really big believer in making your own opportunity to transition into something. So how should somebody do that?

Peter Voss: Because these systems are so widely available, and so is the documentation, and there are a lot of open-source applications, it's really very easy to get into it. In the earlier days, it used to be people who just naturally got into programming, kids who just loved to program. When I discovered software and programming, I was like a kid in a candy store; it was just fantastic, and I couldn't stop working with it. If you're enthusiastic about it, you can basically just learn to use GPT models effectively, or if you're more interested in vision or video you can do that, or music, whatever field you're interested in. And it's so easy to find this stuff these days, anywhere on social channels; just do a search and you'll find it. One of the hot areas right now that anybody can get into, if they're interested enough, is what's called prompt engineering. And that is basically figuring out what exactly you need to input into ChatGPT or similar models to get the kind of results you want, because often just rephrasing a prompt can make all the difference in whether you get useful results or not. At the moment that's a real art; you can't go to university and study it. It's for anybody who has the interest and patience to learn it, and you just have to be excited enough about it to have that patience. And you could make a fantastic career of it; if you're good at it, that's worth a lot of money. So that's one area. The training, or what they call fine-tuning, of models is another area, where you take the base model of GPT and give it additional training. You basically give it an extra data set. For example, for commercial applications, if you wanted to do an FAQ, you would collect all the data on the different ways people can say things if they have a question about, I don't know, delivery times or refunds or whatever might be on your FAQs.
So you collect a whole lot of different examples, and you can actually use ChatGPT to generate these examples. But then you feed them in and basically say, here are 50 different ways, or a hundred different ways, people may ask for a refund. You build up the data set and then do what's called fine-tuning the model, and it then becomes a lot better at answering those kinds of questions. That's another big area; there's huge demand, because almost every company wants to use ChatGPT for those kinds of applications.
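The fine-tuning data set Peter describes, many phrasings mapped to one intent, is often assembled as a JSONL file of prompt/completion pairs. The sketch below is a hedged illustration: the phrasings, the intent label, and the exact field names are all invented (field names vary by provider, so check your fine-tuning API's documentation):

```python
import json

# Hypothetical examples of the many ways customers ask for a refund,
# paired with the intent label we want the fine-tuned model to learn.
refund_phrasings = [
    "I want my money back",
    "Can I get a refund on my last order?",
    "This arrived broken, please refund me",
]

# One JSON object per line (JSONL), in a prompt/completion style.
records = [
    {"prompt": text, "completion": "intent: request_refund"}
    for text in refund_phrasings
]

with open("finetune_refunds.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

print(f"wrote {len(records)} training examples")
```

In practice you would collect dozens or hundreds of phrasings per intent, possibly model-generated as Peter suggests, before submitting the file for fine-tuning.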

Holly Hester-Reilly: Yeah. So those are some of the areas: you mentioned prompt engineering, and then you mentioned fine-tuning the models.

Peter Voss: Yeah.

Holly Hester-Reilly:about startup founders who are realizing that their startup needs to have an AI strategy? What sort of things do they need to be thinking about? Uh,

Peter Voss: I think the danger there is that startups looking for funding will just tag on AI, or just say, we are an AI company. But if you can't really differentiate yourself, is that really going to work? So there are lots of opportunities in AI, but I think the real question is, what market are you addressing, and where is your expertise? Unless you're building tools: if you're quite expert at understanding the core technology, you may build a company around tools, tools to take this technology and develop it for different applications. But as for an AI strategy, at the moment I think anybody in business, irrespective of whether it's a startup or a well-established business, really just has to understand this technology, understand its strengths and weaknesses. I think that applies to pretty much every business; maybe there are some exceptions. Last week I was asked to give a talk about ChatGPT at the University of Denver, and in education it's shaking their foundations, because students can now ask ChatGPT to write their essays or to do their tests for them. They have to embrace the technology; they can't ban it. It's like trying to ban calculators in school, or Google search, or smartphones. In hindsight, we can see how those sort of elevated education, but it's pretty scary right now for educators to ask, how do we embrace this and how do we manage it? We have to understand the technology, and if you're set in your ways, you really don't want to have to unlearn everything and learn this completely new technology, which on top of it is evolving so incredibly fast.

Holly Hester-Reilly: Yeah, it's interesting that you mentioned education, because one of the things that I do is teach at NYU's business school, so I've been following how their administration is providing guidance to professors about what to do. And I think one of the things that's been really interesting is that a lot of professors are having to rewrite all of their assignments, to try to create assignments that aren't necessarily something you can't use ChatGPT for at all, but at least something where the student has to add their own thinking, has to interact with something they have to bring of their own, so that maybe they're using ChatGPT to help them write a draft or finesse some writing. But in the case of my class, for example, they have to go out and interview potential customers. You have to go do that; AI can't go do that.

Peter Voss: Yes, as long as you have assignments that can't be gamed like that. But my partner did some research on this and came up with, I forget where he found this, but there are four stages, like the seven stages of grief. There are four stages of GPT acceptance in academia, and I guess in other fields. The first one is you try to ban it; that's the first stage, and it's like denial. The second one is you very reluctantly implement it. The third is that it becomes mainstream. But the fourth level is really where you embrace it. And ultimately, I think that's what the best educators will do, embrace it, because they have to prepare their students for using these tools effectively. If you want to be effective in the world a few years from now, you have to be totally comfortable with these tools, and your students have to be prepared for that. So it's: how can we use these tools, and then focus on the metacognition part, the reasoning, making sure that students still learn how to reason and to be critical, which is even more important because these models do confabulate, or hallucinate, or lie, whatever you want to call it. So it's really important to understand their strengths and weaknesses, to be very familiar and comfortable with them, but to still use your brain and develop your critical thinking skills, to know how best to use them and to understand their limitations.

Holly Hester-Reilly: Yeah. One thing I wanna touch on there that you mentioned is AI hallucinating or lying. What do you mean there? Tell me more.

Peter Voss: Yeah, so these models were trained with hundreds of billions of words of text that came from, basically, the internet, all different things like Twitter and other social channels. So first of all, there is a lot of disinformation there. There's a lot of stuff that just isn't true. So the things that are true and the things that aren't true basically all got mushed up in this big statistical model, this huge statistical model with trillions of parameters. So when you now ask it a question, it'll pick up pieces. It's not critical. It doesn't think about what it is going to answer. It's just totally automatic. It predicts one word at a time. That's all it does, based on the statistics of what it has been trained on. So it can come up with completely bogus information. For example, one of the fun things you can do is ask for a bio of any person who has a presence on the internet. I asked for a bio of myself, and it attributed degrees to me that I don't have, and skills that I don't have. When I asked it what papers I've written, it was quite happy to make up stuff completely: the name of the paper and even the reference, even the URL, which was completely bogus. So it just knows how to create very plausible-sounding responses. It's very good at getting the style, but the substance is hit and miss. It may be based on true information it has, or it may just be based on stuff it basically makes up.
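[Editor's note: the "predicts one word at a time, based on the statistics of what it has been trained on" loop Peter describes can be illustrated with a toy sketch. This is not how Aigo or any real large language model works internally (real models use neural networks, not simple word counts), but the generation loop is the same shape: pick a statistically likely next word, append it, repeat. The corpus and function names here are invented for illustration.]

```python
from collections import Counter, defaultdict

# A tiny "training corpus" standing in for the internet-scale text Peter mentions.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model): pure statistics, no notion of truth.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=5):
    """Generate text one word at a time, always taking the most frequent continuation."""
    words = [start]
    for _ in range(length):
        candidates = follows[words[-1]]
        if not candidates:
            break  # no continuation seen in training
        # Greedy choice: the statistically most likely next word, whether or not
        # the resulting claim is true -- this is why such models can "confabulate".
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```

The output is always fluent in the style of the training text, which mirrors Peter's point: the mechanism optimizes for plausible continuation, not factual accuracy.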

Holly Hester-Reilly: Yeah, and I think that's something that's really interesting. I certainly had heard about it, but I think that some people find it really scary, and I guess I'm curious how you think about that.

Peter Voss: It's embracing the technology; it's not going away. So just understand that we've had people lying to us a lot. We've had our governments lying to us a lot. Social media lies to you a lot. People should already, hopefully, be used to that. In a way, it increases the urgency for people to think critically. I can give you another funny example here. You can ask it, what's the difference between a chicken egg and a camel egg? And it'll make up that the camel egg is this color and this size. And you can say, how do you make an omelet with a camel egg? And it'll give you detailed instructions, or tell you how many calories it has. So I think it's just to think critically. It's a tool that doesn't always tell you the truth. In fact, it often doesn't tell you the truth. And I think if people get better at acknowledging that and learning to deal with that, that'll actually be a good thing. And perhaps also being more critical of what their tribe, their in-group, tells them. That may not always be right; maybe the out-group has the right answer on a particular subject, and not the in-group.

Holly Hester-Reilly: Yeah, it would be great if people learn those skills and are able to start thinking critically and identifying some of these things. Certainly there are many educated people who have developed those skills, but I also have, I don't know about you, but I have days where I get really down on the state of education and what skills people are coming away with.

Peter Voss: Oh yes. It's an area that interests me a lot as well, and I think there are a lot of opportunities. I think there are some positive signs in terms of more options being available for education, not just the well-established, state-controlled education. Just letting people have more options in education, I think that's a great thing. Now, one part I didn't talk about is the work that we are doing on cognitive architecture, really building AI that can think and learn and reason, that can reason properly, where the AI can actually figure out whether something is likely to be true or not, or what the sources are that the information came from, that can validate it and can think about it. That's what we are working on, and we can't wait to make these personal assistants available to help people think these things through, where you can ask an AI, a reliable AI, to help you think things through and to help you identify that. The reason we call it a personal assistant, and we should really call it a personal, personal, personal assistant, is that there are three different meanings of the word personal that are relevant here. The first one is personal in that you own it; it's yours. So it serves your agenda, not some mega corporation's agenda. So that's the first personal. The second personal is that it's hyper-personalized to you, to your history, your knowledge, your preferences. So it's not just a demographic; it's you as an individual that it learns. And the third personal is the privacy issue: that it'll keep things private that you want to keep private. So we are working on that, and we look forward to that kind of product. Hopefully it'll be ours, though there will be other companies working on it as well, and it will really help people think better.

Holly Hester-Reilly: And so now that's making me think deeply. Are you familiar with the concept, there's different words for it, the one that comes to mind for me is building a second brain, that this author Tiago Forte wrote about? But this external, I used to think of it as like my external hard drive. Like, where am I storing all of my personal knowledge?

Peter Voss:Yeah.

Holly Hester-Reilly: Yes, exactly. My exocortex. How do you see the evolution of AI and external knowledge management in terms of what is it gonna be like to be a knowledge worker even, I don't know, five to ten years from now?

Peter Voss: Yes, the exocortex. I mean, we have very primitive versions of it right now with our smartphones, especially for people who use them a lot, or computers. And we will become more and more tool users, but with a personal assistant it'll be so seamlessly integrated into your own thought processes and your own operation, whether we talk to it or there's some other kind of interface, that it'll become a lot easier to use. The tools that we use will be more seamlessly integrated into the way we inherently think and operate, whereas at the moment a lot of the tools we use are not really very human.

Holly Hester-Reilly: Yeah, that makes me think of when you're at a doctor's office and the doctor says, "I'm listening to you, but I have to type, because this is how I take my notes." And maybe once things are more integrated, that will seem a lot smoother.

Peter Voss:Absolutely.

Holly Hester-Reilly: It's been a pleasure talking to you today. I think we're coming up on the end. How can people find you if they want to learn more?

Peter Voss: Our website, and we have links to various resources there. Also, I've written quite a few articles that are under my name, Peter Voss. I've written about ethics and free will and rationality and all sorts of different topics that have interested me over the years, futurism, life extension, and so on. You could also email me, Peter at Aigo.ai. So I'm fairly easy to find.

Holly Hester-Reilly:Awesome. Thank you so much for your time today, Peter. It's been a pleasure talking to you.

Peter Voss:Thanks for having me. Yes, this was fun.

Holly Hester-Reilly: The Product Science Podcast is brought to you by H2R Product Science. We teach startup founders and product leaders how to use the Product Science Method to discover the strongest product opportunities and lay the foundations for high-growth products, teams, and businesses. Learn more at H2R Product Science. Enjoying this episode? Don't forget to subscribe so you don't miss next week's episode. I also encourage you to visit our website to sign up for more information and resources from me and our guests. If you like the show, a rating and review would be greatly appreciated. Thank you.
