Easy Prey Podcast

Don’t Miss Opportunities Created by AI with Howard Getson

“People are interested in things like AI and automation because they want to amplify intelligence. They want to go from guessing to knowing and continually improve performance.” - Howard Getson

We are living in a world where AI can do many of the things people have been doing, creating an opportunity for us to build and do something new. Not understanding this transition can leave us unnecessarily exposed to risk. Today’s guest is Howard Getson. Howard is the President and CEO at Capitalogix. He runs an algorithmic hedge fund and the data science company that powers it. Capitalogix created a revolutionary financial technology platform that uses adaptive AI to maximize performance with real-time insights. His prior company, Intelligent Control, which he founded in 1991, was an Inc. 500 company and won an IBM award for Best Business Application. Howard currently serves on the advisory council of a bioethics and research institute.

“As technology raises the level of what is possible, it frees the human up to the questions, ‘What can I do? What do I want to do?’ It allows you to do something better.” - Howard Getson

Show Notes:

“The capability that is worth investing in is the one that has hidden energy that makes people go to stage 2. It’s not what it does but what it makes possible.” - Howard Getson

Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review. 

Links and Resources:

Transcript:

Howard, thank you so much for coming on the Easy Prey Podcast today. 

Thanks for having me. I'm excited to be here. I know a whole bunch of people who have been on your podcast before. People like Chris Voss, Bob Cialdini. We're both friends with Stephan Spencer, so we have some interesting friends in common. 

Yep, some great guys and good episodes to listen to. Can you give a little bit of background about who you are and what you do?

My name is Howard Getson. I'm the CEO of a company called Capitalogix. If I was on an airplane, I would describe it differently because I'd want people not to pay attention and let me sleep, but I run an AI company that actually builds the AI that runs hedge funds. We also do other things and other joint ventures in different spaces like medicine and a Fitbit for pets. Some interesting different ways of leveraging AI or amplified intelligence to create results.

That's cool. How would you define amplified intelligence versus artificial intelligence? Is there a differentiation there?

I'm a Strategic Coach guy. I don't know if you know Dan Sullivan and Strategic Coach, but it's a terrific program. One of the concepts that he has is called the bigger future. It's about being able to think about an idea big enough that it's going to last 25 years. What are you willing to devote 25 years to? 

I realized as an entrepreneur that I keep focusing on technology. AI is a technology. Very few technologies last 25 years, but what somebody gets from it—the human nature side of it—is less likely to change. 

People are interested in things like AI or automation because they want to amplify intelligence. - Howard Getson

In a sense, people are interested in things like AI or automation because they want to amplify intelligence. They want to go from guessing to knowing. They want to make better decisions, take smarter actions, and continually improve performance.

Amplified intelligence, I believe, will be a great compass heading for the next 25 years. As we get more and better data, and computers get faster and eventually become quantum, I think for the next 25 years of my business career—I'm approaching 60, so hopefully that'll be an exciting full 25 years—amplified intelligence is something that will cross many industry boundaries. 

It's something that is investable, meaning it's inevitable, but the consequences are uncertain. Even though we know it's going to happen, we don't know what it really means, so there's incredible volatility and opportunity in there for an entrepreneur to find places to add value.

The neat thing that I see about artificial intelligence, or amplified intelligence, is what kind of value it can bring to our lives and how it can impact health and wellness, in terms of whether we're working 90 hours a week or less.

I like to go backward before you go forward. You look old enough that you probably remember the mid-90s, when the Internet was about to be really big. There was a company run by Steve Case called AOL. AOL used to put floppy disks, and later CDs, on the outside covers of magazines. I can't tell you how many AOL disks or CDs I had, and you can probably still hear in your head, “You've got mail.” In the mid-90s, I knew the Internet was going to be big. Did you think the Internet was going to make it big? 

Yes, but not to the extent that it did.

Here's the thing. Even if you did, would you have bet money that CompUSA would go out of business?

No, they were foundational to what the Internet was.

Yeah, wouldn't you think that would be something that you would do more of, not less of? Or the Internet's going to get big so record stores are going to go away? What? Wait a minute, people are going to want to buy a $130,000 car on the Internet without actually driving it? What? The car company is going to send updates to the car like a transformer so it gets new features over that thing called the Internet. 

This is that thing where people are really good at understanding there's a turning point coming, but they're not really good at knowing how to time it or to understand the implications. This is why investing is so hard. 

On the other hand, think about three technologies. Not just for you. I'm going to ask you to answer me, but everybody on the podcast, this is a great thing to think about that is going to make you money if you do it. Think about three technologies that already exist. That means there's no uncertainty about can somebody build it? It exists. This technology is there. But you know that it's going to terraform an industry that you care about or know about in the next three to five years. I'll let you answer by just picking a couple of ideas off the top of your head.

I think one that's been proven, and is potentially a bigger shift than people realize, is remote work.

That's great. By the way, AI could be one, electric vehicles, the ability to read and write DNA or print organ tissue. Blockchain. Not necessarily cryptocurrency but blockchain. 

All right, here's the second question. What technology hasn't been fully invented but you think it's inevitable? Meaning it's not really is it going to be invented, but it's when will it really happen so that it's truly usable? 

I'll give you an example like quantum computers. I don't believe that there's really any question about whether that will be a thing. The question is when will it be a thing and then later what does it really mean? Think about a couple of emerging technologies that really could change everything in the industry. 

I think clean energy without waste, where you can spend and exhaust as much energy as you want to, and it has little or no environmental impact.

I'm going to tell you: generalized AI, where AI is now in devices. Not only that, how about fully autonomous vehicles? That's, in a sense, a version of autonomous AI. Think about this: if vehicles were truly autonomous—I'm talking about not just Teslas self-driving right now, I'm talking about a generation beyond this where they're really autonomous—they can deal with unknown, unseen situations, all sorts of stuff.

You know that there are going to be sensors in the cars that talk to each other. There have to be sensors in the road, on the infrastructure, and on the overpasses. There has to be some way for the system to know about all the components and parts of the system. The fact that I know that fully autonomous vehicles are coming means that I can have an investment thesis that says, “Wow, this whole new place is a giant greenfield.”

Another thing is who's going to own the data? By the way, think about how much data there will be. If there's going to be that much data, there's going to be another need. My real point here is that even though people knew the Internet was going to be big, they didn't think about the consequences. 

Even though people knew the Internet was going to be big, they didn't think about the consequences. - Howard Getson

The other important thing they didn't think about is what role they were going to play. I have a picture of Cassius Clay, who was later known as Muhammad Ali, standing over Sonny Liston. It's a classic photo from Sports Illustrated. In a sense, it really represents the new world versus the old world. I use it in my office to tell a story. 

I say that this is the new world versus the old world. You had a classic brawler in Liston and then Cassius Clay—soon to be Muhammad Ali, floats like a butterfly, stings like a bee—was something totally different. You don't have to be in the new world or the old world. 

You don't see it in the photo, but there was a referee, and the referee got paid during this fight to, in a sense, negotiate and keep things fair between the old world and the new world. You don't see it, but there was a promoter who didn't care who won. He cared about the gate and the fact that he could do another one of these. There were announcers broadcasting the fight, and there was the guy who took the picture.

As these things are happening, you don't have to be the person who invents the new technology. You don't have to be the person who's trampled by it. Where does your unique ability fit, and how do you suddenly find opportunity in an imagined future that's likely? 

It's not just what's going to happen or when it's going to happen. It's really about what are you going to make it mean, what are you going to do, and how does that impact who you are, who you choose to be, and what you choose to stand for creating? 

It's not just what's going to happen or when it's going to happen. It's really about what are you going to make it mean. - Howard Getson

Do you see particular industries or occupations that are not just in the way of upcoming technology, but are going to be steamrolled by AI?

It's funny because I was just interviewed in a documentary about AI and they thought they were asking me this really tough question. What happens when AI gets smarter than us?

That's always the question.

It's already smarter than us. It's not smarter than us at everything, but it's smarter than us about anything. Meaning, for any specific thing that I want it to do, I can make it do that thing, but I can't make it do everything yet. 

Even in our system, some of what we do looks like magic, but almost every new technology looks like magic until you understand the science behind it. What you'll see is that we have thousands, even millions, of little algorithms and techniques, and even our most impressive, best thing isn't a human. It would be profoundly autistic. It's good at what it's good at, but it's not empathetic. It doesn't understand what the other bots are doing.

It's like a specialist versus a generalist.

Even with specialists in our company, it's hard to get the children to play nicely with each other. How do you get hundreds or thousands of bots to understand what others are doing and then to recognize that some of them are not friendly and I can't trust them? I have to make them aware but then I have to say, “If somebody asks you a question, do you answer?” 
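To make the bot-trust idea concrete, here is a minimal sketch in Python. It is not Capitalogix's actual system; the class, the trust scores, and the threshold are all hypothetical, just to show one way a specialist bot might track which peers it trusts and decide whether to answer them.

```python
# Illustrative sketch only, not Capitalogix's actual system. It shows one
# simple way a specialist bot might track trust in its peers and decide
# whether to answer a peer's question. All names and numbers are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Bot:
    name: str
    trust: dict = field(default_factory=dict)  # peer name -> score in [0, 1]

    def observe(self, peer: str, was_honest: bool) -> None:
        """Nudge the trust score for a peer after checking one of its claims."""
        current = self.trust.get(peer, 0.5)   # unknown peers start out neutral
        delta = 0.1 if was_honest else -0.2   # lies cost more than honesty earns
        self.trust[peer] = min(1.0, max(0.0, current + delta))

    def answer(self, peer: str, threshold: float = 0.6):
        """Only share information with peers trusted above the threshold."""
        if self.trust.get(peer, 0.5) >= threshold:
            return "here is my signal"
        return None  # stay quiet for untrusted peers


worker = Bot("momentum-bot")
worker.observe("trend-bot", was_honest=True)    # claim checked out: trust rises to 0.6
worker.observe("spoof-bot", was_honest=False)   # caught lying: trust falls to 0.3
print(worker.answer("trend-bot"))   # "here is my signal"
print(worker.answer("spoof-bot"))   # None
```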


By the way—this is not a joke—my mother called me and said, “I think I made a mistake.” I said, “What do you mean?” She said, “I got this letter from PayPal and they said that somebody had scammed me. They needed my bank account information to check.” I said, “Mom, you didn’t.” She did. Not only that, she downloaded software that let them control the computer, and he was so nice he was willing to blank her screen so she didn't have to watch all this and be confused while it was happening. 

Way too common. 

The thing is that technology is getting good so fast. There's no question. 

I just had a granddaughter. I hope that she understands math, statistics, and data science because I think it's going to become increasingly important. Then, as soon as I said it, I thought: by the time she's old enough to do it, what are the chances that a human is doing the basic coding? I think almost none. 

In fact, what are the chances that she's actually going to learn how to drive a car? The only reason to learn how to drive a car 20 years from now is going to be nostalgia, and wouldn't it be cool to think that you could? How many times have I pushed a plowshare through a field? 

Humans think that we used to act like machines and now machines are starting to act like humans, but humans have always acted like humans. The reason you pushed a plowshare through the field is nobody else was going to do it for you and you wanted to eat. 

As technology raises the level of what's possible, it frees the human to say, “Oh, what do I want to do? What should I do? What could I do? What would actually create the most value and impact?” It frees you to lift your chin and do something better.

As technology raises the level of what's possible, it frees the human to say, “Oh, what do I want to do?” - Howard Getson

Back in the old days, the level of technology wasn't as impressive. Again, we think humans acted like beasts of burden or machines. No, that's what it meant to be human when that's all you could do. As the machines or the technology raises the level of possibility of what you could rely on to produce productive output, then it frees the human to start to imagine and do better things. 

As you get generalized AI or amplified intelligence, as you get fully autonomous vehicles, as you get digital omniscience, a mixed reality metaverse, the ability to have DNA computing and storage or quantum computing, green technologies, brain computer interfaces, lab-grown meat or organs, business in space, the world is going to change. But humans are still going to do work. We're going to continue to find things to do the same way that there are still farmers, they're just doing different things.

They’re running combines instead of bulls and plows.

Yeah.

To step back a bit, you were talking about the AI being able to deal with the data that we're giving it, and asking, “Who should I trust?” and “Who should I not trust?” Is that a problem that you see in the future, people intentionally feeding bad data to the AI in order to poison the results?

Since the beginning of time, didn't Sun Tzu write about it in The Art of War? Didn't you just see it with all the bot farms in Russia trying to create misinformation? There's garbage in, garbage out; nothing in, nothing out. The best way to confuse people is to get them to focus on the wrong thing. 

The best way to confuse people is to get them to focus on the wrong thing. - Howard Getson

If you had an AI system and you wanted to scam somebody, how hard would it be to create automation to create a Reddit post, or a Facebook post, or an Instagram post, or a LinkedIn post that said, “Hey, I just made this much money with this. Take a look at the momentum. I just saw this indicator do this.”

Then all of a sudden, 14 other people said, “Yeah, me too. Plus, look at this.” All of a sudden, you're like, “Oh, my God, this sounds great.” By the way, you just described the GameStop phenomenon. The thing is that it's so easy for a technology to manipulate stuff like this.

Humans manipulate it like GameStop.

Believing that technology will not learn how to lie is silly. Technology is going to learn how to lie. Humans will use it to lie, but technology is going to realize as it looks at different strategies that lying is a strategy with a positive expectancy. 

What I say is it's almost like antivirus or antimalware. It used to be based on specific patterns, saying, “Oh, I found this variant.” They can't do that anymore. They have to look for certain behaviors, because there are almost infinite ways to do those things, so you have to look for other stuff.
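As a rough illustration of that shift from signatures to behavior, here is a toy sketch. It is not how any real antivirus engine is implemented; the hashes, event names, and threshold are made up for the example.

```python
# Toy contrast between signature matching and behavior-based flagging.
# Purely illustrative; real antimalware engines are far more sophisticated.
KNOWN_BAD_HASHES = {"9f86d081", "e3b0c442"}  # hypothetical signature list

def signature_scan(file_hash: str) -> bool:
    # Fails on any new variant: the hash changes, the match disappears.
    return file_hash in KNOWN_BAD_HASHES

def behavior_scan(events: list[str]) -> bool:
    # Flags suspicious *behavior* regardless of what the binary looks like.
    suspicious = {"disable_av", "mass_encrypt_files", "exfiltrate_credentials"}
    score = sum(1 for e in events if e in suspicious)
    return score >= 2  # two or more risky behaviors trips the alarm

print(signature_scan("a1b2c3d4"))                                       # False: unknown variant
print(behavior_scan(["open_doc", "mass_encrypt_files", "disable_av"]))  # True: acts like ransomware
```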

I look at AI as signal versus noise. How do you recognize what is signal and what is noise? Think about the difference between you playing chess and a chess master playing chess. The truth is you both have brains that weigh within a reasonably small number of standard deviations of each other. You have a certain number of brain cells. The thing is that humans can focus on seven things, plus or minus two, at any point in time. 

If you look at the chessboard and you're saying something like, “The horse goes up two and over one,” well, you just used up your cycles. The chess master is saying, “I can ignore that whole part of the board because this pawn right here is actually the key.” Even though they might not be smarter, they're able to use their resources in a much more targeted way.

Signal and noise is really about recognizing that AI is incredibly inefficient when you're processing things that don't matter or that have already been discovered. Being able to figure out how to use the available cycles for potential benefit, and to actually be efficient, effective, and certain at getting it, is very different from saying, “Hey, there's this new technology that creates all this possibility, but it's really hard to do.” Frankly, 60%-70% of the cost and difficulty in any AI problem, or challenge, or opportunity, is data. 

Frankly, 60%-70% of the cost and difficulty in any AI problem, or challenge, or opportunity, is data. - Howard Getson

Actually, I'll just tell you at my own company, I've got dozens of PhD quantum rocket scientist types that are far smarter than me about stuff like this. We stopped doing the data, and you go, “Wait a minute, but you're an AI company.” The thing is that we're really good at AI insights, and you need really good data in order to have really good AI insights. The truth is it was way too costly just to be OK at those things. 

Instead, I wanted to hire somebody that was outstanding at this thing that's important. If I know it's important, I don't want to be mediocre. Mediocre is expensive, I want to be outstanding, I want to be better than outstanding, so I wasn't going to get there myself. 

By the way, that's actually a good reason to use AI. You want AI to create new opportunities, not just to do what you did so you can do nothing. It's the lost opportunity cost that says, “I need AI to do what I used to do so that I'm free to imagine something better.” I want to free humans to increase value. We'll find a way to then use an autonomous platform to scale the opportunity, but you have to understand what you want to scale before you do it. 

There's kind of a model in my head. It goes kind of like this: capabilities, prototypes, products, platforms. Sounds simple. But I don't try to predict technology. I try to understand human nature. 

I don't try to predict technology. I try to understand human nature. - Howard Getson

A capability is like an individual LEGO. If you have one LEGO and it does something, the first thing is simple. A human is going to do something with a new capability to say, “Does it help me do what I already do, just better?” If it doesn't, you say, “Who cares?” In a sense, if I'm investing in technology, I think about what capabilities do people care about? If they don't care, why would I invest? You're not going to get anybody to spend time, money, energy, whatever.

There's a different kind of capability where, if you give it to them, instead of it satisfying their need so they simply say, “OK, thanks,” they go, “Couldn't it do this? What about that?” The one capability that's worth investing in is the one that has hidden energy that makes people go to stage two, which is a prototype, which is where they say, “What could I do or what should I do?” It's not what it already does. It's what it makes possible. Make sense? 

Yes.

A capability is something that you're doing yourself and you're evaluating yourself. A prototype is typically where you, and maybe a team, are trying to accomplish something, but you're fitting two or three LEGOs together and you're going, “Oh, this is better.” As soon as you do that better, you want more. 

When Elon Musk first saw what would become the Tesla on a racing track, the engineers saying, “This is going to be an electric car,” he probably said something like, “But can it go 300 miles? Because if not, we're screwed.” 

You're always thinking about the constraint. It's not about technology, it's about human nature. If somebody gets this, what are they going to want? Each one of these stages is a 10X scaling opportunity. As an entrepreneur, I'm constantly thinking about the things that exist and the things that are likely to exist, which is where we started the conversation, and to say, “What capabilities are going to become products?” 

You're always thinking about the constraint. It's not about technology, it's about human nature. - Howard Getson

A product is where somebody you don't know achieves the result you do know using the capability. That's what a product is. Again, a prototype is you and people you know doing something you know, but a product is people you don't know being able to do the thing you know. Then a platform is even bigger and that's where people you don't know are using this capability to do things you didn't anticipate. In a sense, it becomes the foundation that new stuff is built upon. 

When you think about Tesla, it started with that battery, but that battery is going to be a whole separate energy company. If you think about SpaceX, it looked like a rocket, but now it's a transportation company that brings things into space at 1/20th the cost of what it used to cost to do that so it's going to enable a whole new round of space-based business. 

Then you look at Starlink and you say, “So one will be the UPS or FedEx of the universe and the other is going to be AT&T or Verizon.” You'll realize that they're platform companies. The thing that used to be a product actually becomes the capability for an even bigger prototype product platform. It's recursive and iterative.

But once you start to figure out in your industry what platforms are likely to exist in the next five years or so, then you can start seeing which product companies are likely to be the ones responsible for the platform and then which component capabilities they are going to need to acquire in order to make that happen. 

All of a sudden, you have an investable thesis. Now, you do the same thing where you think about yourself in that space and you say, “If those things are happening, what do I want to do?” 

How is that going to change my life? What opportunities is that going to bring to me?

Yeah. For me, I've created what, to me, is a very commonsensical model. It's a little bit involved, but it basically says, “Look, with AI, the promise is exponential results.” We talked about the concept of amplified intelligence, but they're looking for exponential results. 

I think there are three key drivers that make that happen. One is you have to have a competitive advantage. The English word that I would substitute is more ways to win. If you only have one way to win, then you're hoping the future looks a lot like the past and if it doesn't, you're screwed.

If you only have one way to win, then you're hoping the future looks a lot like the past and if it doesn't, you're screwed. - Howard Getson

More ways to win lets you deal with a wider range of conditions and have a bigger playbook. But if you have more ways to win, you actually need strategic certainty. You need a way to measure what's filling your cup or draining it. Otherwise, you've only created a way to distract yourself. 

Remember we talked about signal and noise? As people start to create more ways to win, they're actually creating their own noise. They're creating things that are going to drain their cycles of money, focus, energy, and opportunity management. There has to be some way to intelligently direct resources into the areas that have the highest expectancy score.
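To show what an expectancy-driven allocation could look like, here is a small illustrative sketch. It is not Capitalogix's model; the strategy names, win rates, and payoffs are invented, and expectancy is used here in the common trading sense of win rate times average win minus loss rate times average loss.

```python
# Illustrative only: compute an expectancy score per strategy and tilt
# resources toward the ones with the highest positive expectancy.
# Strategy names, win rates, and payoffs are invented for the example.

def expectancy(win_rate: float, avg_win: float, avg_loss: float) -> float:
    """Expected result per trade: win% * average win minus loss% * average loss."""
    return win_rate * avg_win - (1 - win_rate) * avg_loss

strategies = {
    "trend_follow": expectancy(0.40, 2.5, 1.0),   # 0.40
    "mean_revert":  expectancy(0.65, 0.8, 1.0),   # 0.17
    "news_scalper": expectancy(0.50, 0.9, 1.0),   # -0.05
}

# Direct resources only to positive-expectancy strategies, in proportion.
positive = {name: e for name, e in strategies.items() if e > 0}
total = sum(positive.values())
allocation = {name: round(e / total, 2) for name, e in positive.items()}
print(allocation)  # {'trend_follow': 0.7, 'mean_revert': 0.3}
```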

But then you need the ability to continue to learn and grow because in a sense, anything you've solved for happened in the past and the world is actually happening forward. I call it compounding insights but it's learning and growing. Competitive advantage with compound insights is what produces amplified intelligence. That's how you go from guessing to knowing. 

Competitive advantage with compound insights is what produces amplified intelligence. - Howard Getson

There are three verbs that do this. By the way, amplified intelligence is a noun. It's a thing. You're searching for a thing. But how do you do it? There are three main verbs: discover, challenge, and innovate. 

Discover is not the verb most people would think, but data science isn’t about creating a new truth—that’s often lying. It's about finding a profound truth that was hidden by noise. It actually makes sense. Discover is an important verb. If you're into amplified intelligence, you have to start to think of a functional component step about “What do I need to discover and how much am I willing to pay to know certain things?” 

Then you need a second one called challenge. In scams, I hear something and ask, “But is it real?” A challenge is, “Was it skillful or lucky? How likely is it to repeat? Was it a coincidence, or did this cause that?” 

Innovate is saying, “And if I didn't find what I'm looking for, I might have to think about different ways to do it.” That's amplified intelligence. But for exponential results, along with a competitive advantage you also need strategic certainty. That's what produces a platform. 

You think of a platform as the way you automate things. That's how you go from lucky to skillful. Instead of saying, “Oh, that thing worked,” I actually have enough things going that I can allocate resources to the thing that's working. 

There's no thing that always works but there's almost always something that does. I need something that's not just a platform. I need it to be an autonomous platform because the human only has seven things, plus or minus two to focus on, but I can have a computer focused on so many more things. 

The three verbs to build the platform—platform is a noun—are build, scale, and refine. In total quality management, this used to be plan, do, check, act. As you start to create something, as soon as it works, you want to make it bigger, faster, and more, and then it breaks. So you start to say, “Oh, I needed clean data. Oh, wait, I needed to update the data daily. No, it's actually every hour.” Then soon, you're in the part where you go, “No, I want this to be real-time.” 

There's plan, do, check, act, but it's build, scale, refine. That solves the problem you currently have, but as an entrepreneur, your business is actually, again, about the future, so the third component is that you need a sustainable edge. That's how you go from situational to perpetual. 

The three verbs there are grow, adapt, and fortify. The reason is your company is growing. The customers are growing and their needs are growing. You're selling a different set of products, services, or offerings. You've got to adapt strategically and then you've got to fortify the stuff that you thought you knew because the truth is the thing that was the very best strategy 18 months ago might actually be hurting you now. 

It's true in employees all the time. The best employee in your company when you have 10 people might not even make the cut when you're at 50 or 100. It's not that they're a bad person. It's a bad fit for where you've grown to. 

With AI, one of the problems is you have to start to have expiration dates for some of these things that you're most proud of because as you create more and more things that are happening automatically, everybody thinks AI is cool, but automatic stupidity is scary. Making mistakes at light speed is not good and having something that you know has worked start to lose money is scary if you don't know it's losing money. 

The thing is that you know it worked, but knowing it worked is different than knowing it's working and useful to have in the process. You have so many things that are automated, but how do you keep track of all the things that are happening when it's no longer a human? It's hard enough to do when it's you and you're like, “Oh, I forgot.” But if it's your team, it's like, “Well, you ought to have reporting systems.” 

Well, guess what? As you start to build more and more integrated AI to do all sorts of different stuff, there's going to have to be a whole different way to audit unintended consequences because AI is a great antidote to fear, greed, and discretionary mistakes. But this is worse. This is an unforced error that you intended. You literally decided to turn this on and leave it running. Business is going to change, but it's going to continue to rely on humans for a long time. 
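One simple way to picture the expiration-date-and-audit idea is a guard wrapped around an automated strategy. This is purely a hypothetical sketch, not a description of any real system; the dates, drawdown limit, and field names are illustrative.

```python
# Hypothetical sketch of the "expiration date and audit" idea: an automated
# strategy gets a hard expiry and a drawdown kill switch, and every decision
# is logged so unintended consequences can be audited later. Illustrative only.
from datetime import date

class GuardedStrategy:
    def __init__(self, name, expires_on, max_drawdown=0.10):
        self.name = name
        self.expires_on = expires_on      # stop trusting old edges by default
        self.max_drawdown = max_drawdown  # stop making mistakes at light speed
        self.peak_equity = 0.0
        self.audit_log = []

    def allowed(self, today, equity):
        """Return True only if the strategy is still fresh and not bleeding money."""
        self.peak_equity = max(self.peak_equity, equity)
        drawdown = 0.0 if self.peak_equity == 0 else 1 - equity / self.peak_equity
        ok = today <= self.expires_on and drawdown <= self.max_drawdown
        self.audit_log.append((today, equity, round(drawdown, 3), ok))
        return ok

strat = GuardedStrategy("momentum-v3", expires_on=date(2024, 6, 30))
print(strat.allowed(date(2024, 1, 15), 100.0))  # True: fresh and at its equity peak
print(strat.allowed(date(2024, 2, 1), 85.0))    # False: 15% drawdown trips the kill switch
```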

It's almost like you need AI to oversee and manage the AI.

That's exactly what you do. You think you're joking, but that's literally my business. We build layers of AI and we even use words like worker, manager, or supervisor. At the top, you almost have an orchestra conductor who's saying, “Oh, I want the woodwinds. I need the percussion. Now, everybody, play softly. Now, everyone, loud. Now, just the flute solo.” 

That all makes sense when you're looking at AI as a specialist. You've got a bunch of worker AIs being specialists and then you have a supervisor AI who doesn't know how to do the work but knows how to manage the specialist to make sure they're producing or performing in a way that makes sense. 
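Here is a minimal sketch of that worker-and-supervisor layering. It is not Capitalogix's implementation; the specialists, weights, and thresholds are hypothetical, just to show a supervisor weighting specialist signals by their recent track record.

```python
# A minimal, hypothetical sketch of the worker/supervisor idea: specialist
# "workers" each produce a signal, and a supervisor, which cannot do their job
# itself, weights them by recent track record and decides what to act on.
# This is not Capitalogix's implementation; names and numbers are illustrative.

class Worker:
    def __init__(self, name, signal_fn):
        self.name = name
        self.signal_fn = signal_fn       # specialist logic lives here
        self.recent_accuracy = 0.5       # supervisor updates this over time

    def signal(self, market_data):
        return self.signal_fn(market_data)   # e.g. +1 buy, -1 sell, 0 stand aside


class Supervisor:
    def __init__(self, workers):
        self.workers = workers

    def decide(self, market_data):
        # Weight each specialist's vote by how well it has been performing.
        weighted = sum(w.recent_accuracy * w.signal(market_data) for w in self.workers)
        total = sum(w.recent_accuracy for w in self.workers)
        score = weighted / total if total else 0.0
        if score > 0.25:
            return "buy"
        if score < -0.25:
            return "sell"
        return "stand aside"


workers = [
    Worker("trend", lambda d: 1 if d["momentum"] > 0 else -1),
    Worker("volatility", lambda d: 0 if d["vol"] > 2.0 else 1),
]
print(Supervisor(workers).decide({"momentum": 0.8, "vol": 1.2}))  # "buy"
```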

If the people on your podcast wanted a PDF that showed those models that I talked about, they just have to text “AI” to (972) 992-1100 and we will give them a little presentation and PDFs that show the methodology and form. 

Awesome. I'll make sure to include that in the show notes. This discussion of AI just leaves me with the sense of, “Oh, there is just so much going on underneath, behind the scenes, around the corner, just ahead.”

But it also means that there are a lot of people who have no clue. They're really well-intended but they're tricking themselves. They're not lying to you, they're lying to themselves. It's because they found out about GPT-3. They're like, “Oh my God, look what this makes possible. I can do this.” They think that they're an AI entrepreneur. What they're doing is they're surfing a wave, but they're not Mother Nature. They didn't create the wave and they're not a skillful sailor. 

I think everybody is going to be able to leverage AI over the next 25 years. It's inevitable, even if you don't know how. If you get your electrocardiogram done, AI is going to be reading it and sending alerts to the doctor. In so many places, AI is going to make your life better and you won't even notice. 

I think everybody is going to be able to leverage AI over the next 25 years. It's inevitable. Even if you don't know how. - Howard Getson

But on the other hand, if you're an entrepreneur or an investor, all these existing and emerging technologies present promises and perils. What's important is not to focus on the peril.

My mother watches CNN and every time I talk to her, she's like, “Oh my God, this is so terrible.” My response was, “Stop watching.” If she watched and instead she's like, “Oh, I feel so good because look how the world is growing and we've never had so many people with so much visibility,” I'd feel differently. But they're using it to feel bad. 

As you move to crypto creating authenticated provenance and governance, you might say, “Oh, even more regulations.” What you make it mean, what you choose to focus on, and what you do are things that you get to decide, but you are in an exponential world that is getting better at an exponential rate. You can celebrate that it's never been easier to get faster, or you can be frustrated that it's never been easier to get more bad data and be more confused faster. What you choose to do is up to you.

It's a perspective. I think that is a great spot to leave things. What are we going to do with what we have? What's our perspective? Where are we looking for it to take us?

Exactly. The way to go is to text “AI” to (972) 992-1100. I'll tell you what, for your podcast people, every Friday I end up sending a list of 20 articles I found most interesting during the week. It's typically a mix of business, fun, science, tech, or just human interest, but that's where I would look. I spend hours and hours every week as a futurist thinking about these things and sharing what I think is interesting. I do it because I'm looking for potential collaborators who say, “Oh, I'd love to use that AI platform to make what we do better.” I'm long-term greedy, not short-term greedy, but I would love to talk to people who love the future and are trying to figure out what it means and what they can do. 

Awesome. Aside from texting “AI” to (972) 992-1100—I think I got that right.

Our website is capitalogix.com. There's a blog, but frankly, our website is bad. It's designed so that people know we exist, but I don't want people to know what we really do. 

I profoundly understand exactly what you mean by that.

We invent AI and we figure out how to use it to build cool things based on these models that we just talked about.

That's really cool. Howard, thank you so much for coming on the Easy Prey Podcast today. 

Thanks; I enjoyed it. 

 
