Technology, Trust & Time

Hosted By Chris Parker

291
“It’s not whether AI is good or bad. What really matters is how we choose to interact with it.” - Esther Dyson

Technology is moving faster than our ability to process its impact, forcing us to question trust, motivation, and the value of our time. Few people have had a closer view of those shifts than Esther Dyson. With a background in economics from Harvard, Esther built a career as a journalist, author, commentator, investor, and philanthropist, with a unique ability to spot patterns across industries and challenge assumptions before they become mainstream.

She is the executive founder of Wellville, a ten-year nonprofit project dedicated to improving equitable well-being in communities across the United States. Beyond her nonprofit work, Esther has been an active angel investor in healthcare, open government, digital technology, biotechnology, and even outer space. She’s currently focusing on health and technology startups, especially the ones that actually care about human connection instead of just making everything faster and more efficient.

When we chatted, Esther made this really compelling point about AI. She thinks we're asking the wrong question when we debate whether artificial intelligence is good or bad. What really matters, she argues, is how we choose to interact with it. We dove into some tough ethical questions about how quickly we're adopting these technologies, this concept she calls “information diabetes,” and why being upfront about who's funding what and why is absolutely crucial if we want to trust anything anymore.

“Ask good questions. That’s it. And then listen to the answers and ask why again. Don’t lose that curiosity.” - Esther Dyson

Show Notes:

  • [01:44] Esther describes her career path from journalism to independent investing and healthcare projects.
  • [02:52] She explains why Wellville had a set end date and connects it to her upcoming book on time and mortality.
  • [04:08] Esther gives her perspective on AI, tracing its evolution from expert systems to neural networks and LLMs.
  • [06:17] She stresses the importance of asking who benefits from AI and being aware of hidden motives.
  • [12:44] The conversation turns to ethical challenges, biased research, and the idea of “information diabetes.”
  • [15:37] Esther reflects on how wealth and influence can make it difficult to get honest feedback.
  • [18:09] She warns that AI speeds everything up, making it easier to do both good and harm.
  • [20:14] Discussion shifts to the value of work, relationships, and finding meaning beyond efficiency.
  • [25:45] Esther emphasizes negotiation, balance, and how ads and AI should benefit everyone involved.
  • [27:28] She highlights areas where AI could be most beneficial, such as healthcare, education, and reducing paperwork.
  • [29:26] Esther argues that AI companies using public data should help fund essential workers and services.
  • [31:08] She voices skepticism of universal basic income and stresses the need for human support and connection.
  • [34:55] Esther says AI is far from sentience and accountability lies with the humans controlling it.
  • [36:46] She explains why AI wouldn’t want to kill humans but might rely on them for energy and resources.
  • [37:33] The discussion turns to addiction, instant gratification, and the importance of valuing time wisely.
  • [41:02] Esther compares GDP to body weight and calls for looking deeper at its components and meaning.
  • [42:19] She explains why she values learning from failures as much as from successful investments.
  • [43:18] Esther closes with advice: ask good questions, stay curious, and never underestimate the power of a smile.
“People don’t go into teaching because of the salary. They do it because they want to work with people. AI should help them do that, not replace it.” - Esther Dyson

Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review. 

Links and Resources:

Transcript:

Esther, thank you so much for coming on the podcast today.

Glad to be here.

Can you give me and the audience a little bit of background about what you do and your life?

There's a lot of it. Very briefly, I grew up in a bubble. My dad's boss was Oppenheimer after Los Alamos, so I grew up in this community of academics. I was 14 when I made the amazing discovery: I knew about the mailman and the retail stores, but I thought normal people got three months off in the summer, like my parents, the academics. I left home pretty early. I worked for the school newspaper; I wrote articles for free and then I proofread for money. I ended up at Forbes for three years as a fact checker and a reporter.

School I didn't really like because it was learning stuff people already knew. Being a reporter was finding out stuff people didn't know. That's basically what I've done ever since. I pretty much retired in the sense that after 1982, I had no boss, but I wasn't like a founder dude. I had a newsletter company and a conference that were very important in the tech world. The goal was not to grow a big business. It was more to learn for myself and inform other people.

I got interested in healthcare and realized healthcare was not the issue. Healthcare is a repair job; how do you keep people from needing the repairs in the first place? It's more maintenance, and Stewart Brand's writing a book on maintenance, for what it's worth; it will be great. So I started not a 10-year healthcare project, but a 10-year health project called Wellville, which ended on time, almost exactly as scheduled, on December 31, 2024.

Nice.

Around that time everybody said, “Oh, Esther, that's awkward. What happened to Wellville?” It was supposed to end; it was a project. I began thinking more and more about term limits. That's what my book is about: how in a world of AI, which is about exponential growth, immortality, and everything being quantified, human beings are special precisely because we're finite and we do end. Our mission is to figure out what do we do with the time we have, not how do we extend it forever.

Our mission is to figure out what do we do with the time we have, not how do we extend it forever. -Esther Dyson

Yeah. As part of that very experiential exploratory life, do you think that has set you up for doing this now?

Doing the book?

Yeah.

Yeah. I've been a jack of all trades, master of none. I've seen so many different things. The insights come from comparing things that are different and seeing how they're the same. One of my favorite notions is the Venn diagram. My life's a wonderful Venn diagram: I'm right in the center of lots of little circles, but I'm not stuck in any one of them.

I love it. Let's talk about AI. What is your beginning overview of AI? What is your view of where AI is currently?

It's like, where's the world currently? Are you going to start with countries? AI itself is so many different things. By definition, it's non-human. Also by definition, it's got some intelligence, but there are so many different kinds of intelligence. You can calculate, you can execute logic, you can look for causations.

I first encountered AI back when I had my newsletter and conference. AI was Marvin Minsky and it was basically logic systems that were called expert systems until experts like doctors said, “We don't need no stinking experts,” and then they started being called assistants.

There was lots of stuff with natural language, and Doug Lenat, who died a year or two ago, built this ontology called Cyc, and then later on there were neural nets and LLMs. One of the challenges right now with AI is do we call it “it” or “they.” In a sense, there needs to be a noun that's collective but singular. When you talk about AI, are you talking to an AI or to an interface to the AI? People don't even understand that.

It's almost like saying, “Tell me about transportation,” a much larger concept than people are probably thinking of.

Exactly. Right now, people are, “Oh, AI is an LLM. It's this thing you talk to.” It is a thing you talk to, but then behind it there's lots of other things.

Yeah. How do you see our use of AI impacting the way humans work?

Yeah. Perfect question because, in a sense, that's the big issue. Again, off on a tangent, but it comes back. I have two companies that are couples counseling. One is couples counseling for you and your money called The Beans and another is couples counseling for you and your food called Eat My Way. It's not the thing itself, it's your relationship with the thing. That's what addiction is.

So many things are neither good nor bad, and that's certainly true of AI. -Esther Dyson

So many things are neither good nor bad, and that's certainly true of AI. It's even true of guns. Guns tilt more toward harmful because their job is to destroy something. If you destroy something dangerous, usually that's good, but there are ways to do it and ways not to.

The question is how do we relate to AI? Do we follow it blindly? Do we use it to implement a good business model or a bad business model? None of this stuff is new. It goes back to Cassius, the Roman judge who said cui bono, which means who benefits. I know this is a favorite topic of yours. We need to understand what somebody wants. Why are they being nice to us? It's not that there's no free lunch. He who pays for lunch gets to choose the menu.

The question is how do we relate to AI? Do we follow it blindly? Do we use it to implement a good business model or a bad business model? - Esther Dyson

How do we become better informed? If you're talking about our relationship with money and our relationship with food, how do we become more informed about those relationships? You mentioned transportation again, but how do we become more informed about our relationship with AI when we're interacting with an AI?

At first, it doesn't matter whether you're interacting with an AI or not. It's like, what's behind that? It could be lots of workers in a data center somewhere. It could be an AI; it could be one smart person. It's, “Who's controlling this thing and what do they want from me?” The fundamental thing, honestly, is to be raised to ask questions, which of course, growing up with a bunch of scientists, is all I did.

To understand that, “Why are they selling me this thing? Why are they giving me this thing for free? Why was I invited to the party?” It's not that you need to go be cynical your whole life, but just have an understanding. You also need to understand your own motivations. “Why am I so angry at this person?” Probably, maybe, it has something to do with my experience, or maybe he reminds me too much of somebody I know.

Understanding what's going on around you so that you can both manage your own reaction and decide whether you want to engage with somebody is table stakes. It's hard because the world's getting much more complicated. There are many more people sending you offers and suggestions. Everywhere you go, things are designed to foster some kind of behavior that's usually in someone else's interest.

How do we figure out what's in the interest of the people that are building this system? What are the questions we should be asking to figure that out?

We should be good, aware citizens. We should be asking our governments to set up disclosure requirements. I would love to see not just food ingredients, but how much of the cost of this food was spent on the food itself, how much on ultra-processing it, and how much on advertising. What's the actual core value? And then there's the other stuff I'd like to know when I read an article. Was it sponsored by somebody? Were the influencers paid? Where are they getting their money? Are they really independent of these products?

So much of the time they're not. I get PR releases and stuff all the time. I like to read them as a form of education. One company that seemed interesting was exploring the health benefits of various things. Then it said, “In order to scale properly, we've decided we're now going to sell some vitamin supplements based on our research.” You're not only known by your customers or by your friends; you're also known by the deals you've refused.

You're not only known by your customers or by your friends, you're also known by the deals you've refused. -Esther Dyson

First of all, when you talk about clinical trials in medicine, the government finally put in a rule, one that's not being enforced effectively enough, that we want to know when the trials fail as well as when they succeed. What clients have you turned down because you felt their products weren't good, or they were dishonest, or something? It's not just who your best friends are; who do you refuse to associate with? Who's the Jeffrey Epstein in your life?

I was just reading a story about a researcher who, because of government changes, was losing grant money. Someone came to him and said, “Hey, we'll sponsor your lab, we'll take care of your grad students, give you the funding for them to finish their projects, but we want you to do a research study on our medication. We only want you to publish it if it's in our favor. If it's not in our favor, you can't ever talk about it. All the money is yours. Your grad students are set up. You just can't tell anyone if the study fails.”

Yes, precisely.

It is interesting because you never hear the contrary voices on things.

Yeah. That's what I like to call money diabetes. There's diabetes: food messes with your metabolism. There's information diabetes: ultra-processed information that manipulates your cognitive system. And then there's the money you get for doing things you know are wrong. That manipulates your own moral sense, because your mind wants to justify yourself to you, so you start lying to yourself. “Yeah, I took their money, but they're really well-meaning. Lots of people have benefited from it, even if it's just the placebo effect, but that's fine.” Then you no longer trust yourself, and you start drinking or something. Those are maybe two sides of the same story.

It's also the justification of, “If I have this money, I will do good things. Even if this source of money isn't doing the right thing, I will do the right thing later, and there'll be a bigger positive benefit on society because I did the right thing, even though I didn't get my money for doing the right thing.”

Yeah. Would you mind publishing that? No.

Yeah. It's definitely a challenge. I see people's relationship with ChatGPT. I don't think humans are very good about this. It's the source of information of the day. People don't understand how the information is generated and how it's delivered. It's just the new bright, shiny delivery mechanism.

It's very convenient. Again, some people start to change their relationship with it. One problem is just random misinformation, but the even more dangerous one is that it's designed to be pleasing. It was Tina Brown who said this so nicely in a podcast with somebody: it's not just what money does to you, it's what your money does to how other people treat you. It's not just ChatGPT that wants to please you. If you're Elon Musk or somebody, the people around you all want to be nice to you because they depend on you. It's really hard to get anybody to tell you the truth once you get very rich, except for your old friends that you really trust. Some people keep those friends around, but a lot of people just settle for the ones who tell them nice stuff.

Yeah. I think in the human condition, we want people to be nice to us. We want to be liked, and it's hard for us to tell why people like us.

Yeah. In the human condition also, this was long ago, like in the eighties or nineties, I had this company that did the newsletter and the conference and came back from a trip to Russia. I had dropped my computer. The hard drive was not working, so I gave it to my team to get it taken care of.

After about two days, my CEO came up to me and said, “Esther, this is awkward, but you seem to think we're not trying to get your computer fixed. I know it's taking time, but we're doing our best. Just be nicer.” I was horrified. This was a team of five people. I had specifically hired Daphne as CEO because I didn't want to run the thing. I wanted to be the person doing the fun stuff. She was scared to tell me, “Don't be a jerk.”

Imagine having hundreds of thousands of people reporting to you. Of course they're not going to tell you you're a jerk. I've always been so grateful to her for that. I hope someone else would've done it soon enough, but you really need to hear that and be grateful for it. It wasn't that painful. I apologized, and I knew I'd been a jerk. I hadn't told them they were stupid, but I'd clearly been a jerk.

Got you. Aside from government disclosure about what the funding's been doing and the business models, what are other things that you see that we need to change about how we interact with these new models?

It's not clear yet. We needed to do that before with other people, too. The difference is the speed at which things work. It's like AI is this fourth dimension where suddenly everything can go so much faster. There will be fewer people to say, “I don't want to do this because it's against my moral compass.” You can set something up to do almost anything. It's going to be much easier to do both good and harm, and people who do harm will be less constrained than people trying to do good.

Unfortunately, you need to be more conscious of what you're being asked to do. You have to be more conscious of what the impact of things will be. What you also need ultimately is a lot more people to be trained to be good people rather than people trained to do paperwork. In the end, that's good. A job working with people, that's what people really like.

You talk to all these HR people and they say, “Well, a good boss or a bad boss is the biggest determinant of employee retention.” Yes, there are bad jobs, but ultimately people do get a community at work. That's part of the problem right now with remote work, gig work, and all these things: no, don't bring yourself to work, but yeah, do bring yourself to work, because it's a third of your life.

You should be able to take pleasure from the people you work with because in the end, that's your family, the people you work with, your neighbors. That's what really gives people satisfaction, even though they keep being told you can be an influencer, you can have millions of followers. Followers are, again, they're quantified, but you don't know when they leave. You want people who will indeed engage with you and love you because you're good, not because they want your money.

Yeah. The challenge is when the staff is employed by companies, it's not about relationship building, it's not about building trust. It's about, “How can I get more done with less and be more efficient?”

Yeah. It's like, to what end? Because when you die, you can't take it with you. Your real job when you're alive is to pass it along, to raise your children. There's this one woman in my book, Pam Drucker, who's a dental hygienist. She's the best dental hygienist around.

There's this wonderful French story where a nobleman is driving through the countryside in his horse and carriage. He sees three bricklayers in sequence, and he asks the first one, of course in French, “What are you doing, young man?” The young guy says, “Hey, dude, I'm laying bricks, as you can see.” He goes to the second one. The second one says, “Well, my lord, I'm building a cathedral.” The third one says, “I'm honoring God.” That's Pam, taking care of teeth. She's not even using some fancy name for it, like cavitronics or something, to clean teeth.

She is giving me better teeth so I can have a better life. She's talking to me like a human, telling me how to use this light night guard so I don't grind my teeth, whatever. She's a human being, and she loves making other people's lives better.

Yeah. That's one thing that technology is agnostic about.

Yeah. You can use the technology; it's great. The cavitronics thing is amazing. But in the end, it's that personal attention of somebody who doesn't hurt you: “I can give you more novocaine. Here are the free samples. Oh yeah, I love giving them to people.” That kind of stuff.

Yeah. We lose all that when our primary interactions become, “I'm just interfacing with the computer.”

Yeah. At the end, what have you accomplished? I don't want to be competing with an AI. I want to be loving and being needed by other people.

I guess the flip side of the conversation is, does the scaling and the efficiency that we gain from technology give us the time to invest in relationships away from the technology?

That's what I hope. Again, it's interesting. There's this wonderful book called Scarcity by Sendhil Mullainathan and Eldar Shafir about why poor people do such stupid things with their money. It's written, obviously, for rich people who have enough time to read books. The answer is, “Hey, dude, it's about trade-offs.” It's about the ability to take risks: if you're rich, you can take risks all the time, and if you get a steady 10% return on whatever you do, you're going to end up richer. A poor person, if they take that risk and fall into a hole, they'll never get out. They can't even take sensible risks.

Meanwhile, you have client trade-offs. You missed your daughter's ballet performance because this client was in town. Think about the value of your time. Think about the value of time with your daughter versus one more contract. Your goal should be to use your time in a way that makes you not just joyful right now, but happy long term, and gives you those relationships that actually give meaning to your life.

Your goal should be to use your time in a way that makes you not just joyful right now, but happy long term, and gives you those relationships that actually give meaning to your life. -Esther Dyson

It all sounds very “moral compass.” What are the numbers here? What are the numbers? Then it goes back to that play by Oscar Wilde, where one of the characters says he knew the price of everything and the value of nothing.

In my mind, there's this spot I think of, and maybe success is not the right inflection point, but there's potentially an inflection point in our life where we stop trading. When we're young, we spend our time in order to get money, let's just call it that. At some point in our life, potentially, there's a pivot point where we start spending our money in order to gain time. We always want to hope that happens earlier in a person's life. It's like, “OK, I can spend money to take care of the things that allow me to spend time with the people I want to spend time with.” I would hope that AI and technology would help bring that pivot point earlier in many people's lives, particularly since some people never have that pivot point at all.

Yes, exactly. That is, in a sense, the point. What are you doing this all for? There are negotiations where both sides want 60%, and then there are good ones where both sides want 40%. Those just make you so much happier.

Yeah, that's true. It's when people are looking for a win-win solution versus “I want to make sure I win.”

Yes.

I joke when I work with advertisers for my website that there are three entities the advertising needs to work for. It needs to work for the advertiser, because they need to get customers. It needs to work for me, because it's income for me. But it also needs to work for the customer, so that they're getting something out of it too, and the ads aren't just an inconvenience.

And not just free content?

Yes.

Ideally, they're getting information about something that will be of value to them, rather than being told they're inadequate and need to buy this thing to fix their inadequacies.

Yes. “Use ChatGPT; it will make your life better.” I'm picking on ChatGPT.

I know. Yes.

It is the AI concept that most people relate to. In the current mindset of Americans, ChatGPT is AI, when AI is so many places beyond that.

Yes. Yeah. It's the iceberg behind it. That iceberg is amazing, incredibly powerful, incredibly efficient, and you just want it to be doing good things.

Yeah. Are there particular areas where you see, “Oh, my gosh. This implementation of AI is going to be incredibly powerful and helpful for some segment of the population”? Do you see spots where, “Oh, my gosh. This is really going to be a problem if we don't do something”?

Again, there's so much work. It's just like, thank God we have machines that can build cars. People do a little stuff here and there around it. We should be automating the process of figuring out how much you work so that you can qualify for Medicaid if you're eligible. We should be automating the DMV stuff.

In the industries that face consumers, this is most important. Take healthcare: less paperwork and more personal connection for everything. Don't lay off all the people doing the paperwork; train them to also do the personal connection. We need more doctors. We're short of doctors. There are so many places in the US where you can't get a primary care appointment for months.

Same thing with schools. We need teachers using AI to help them become better teachers. We certainly need to use AI to manage all the processes, the compliance, and the due diligence. It could free people to be people. That's what they want to do. They don't go into teaching because of the salary.

Childcare work is another place we need this. This is what I'm talking about in my book. Think about the way we handle water: if you sell water in bottles, you pay money to use the water from the land, because that's a collective asset. In the same way, the LLMs use this collective asset of information that's public. There are some copyright issues about a bunch of stuff on the side, but fundamentally they're using a public asset, so they should be paying something extra: not a tax so much as a land-rights fee, a water fee, or something like that. Then take that money and apply it to the training and compensation of childcare workers, doulas, and nurses. Make sure we have enough police so that they can play basketball with kids in the afternoon.

I work in an office here in New York. I look down on a street that they close off so the police stand at either end, so that the kids can come out and play on the street. That's their playground.

I love it.

Yeah. I hear the kids screaming at one another and laughing. We need more of that. We need people who do these jobs to be recognized, respected, and paid, because they need to afford childcare for their own kids, and again, avoid being stuck in that place where you can't take even good risks because you're in danger of falling into the hole.

Yeah. I really like the idea of AI doing the repetitive busy work in our lives and freeing us up to do the things that matter. I don't know that AI will ever be truly creative, but it frees us up to be creative. It frees us up for music, life, sports, activities, and time for family. Hopefully it doesn't just mean you still have to work 40 hours a week and you're just getting more done for less, so to speak.

People need to feel useful. I'm not a big fan of universal basic income. I'm certainly a fan of national support for people who, for whatever reason, can't work, whether it's mental or physical disabilities or whatever. What's really so devastating in this country is how many kids don't live up to their potential because they don't get what they need. I talked about how you have to be aware of people's motivations, but you also have to have people you trust and people who love you that you know have your back.

It sounds like we need to figure out how to leverage this transition with AI for us to build relationships with real human beings.

Yes.

Which is totally outside the scope of AI.

It is. That's the thing. It really is an “it.” My relationship with AI is interesting. When I was eight years old, I passed out on the school bus. I was trying to get off, standing right at the front of the bus, and the bus driver seemed to be right here, not out there. I could see, but I couldn't construct 3D space. That's one piece of sentience.

My brother was on the bus; he took me off and laid me on the grass. Just that experience of realizing, oh, your mind is working to construct something that's not quite real. The difference between knowing something and feeling something has stuck with me forever. At the same time, my favorite science fiction is when the [inaudible 00:34:02] are looking down on earth and saying, “It's so strange. The thinking is intelligent, but those people think with their meat. They're made out of meat.”

In some sense, if many years from now, electronic things develop sentience because somehow they have all these different sensors, it's not impossible. Thinking with electronics is just as weird as thinking with meat, chemicals, and hormones. The stuff we've got now is nowhere close. That's the point there. My parents were scientists. I'm interested in this stuff, but we need to be realistic about how early we are.

In other words, sentience is way, way, way away?

From everything I see, absolutely. Accountability. AI is not guilty. It's the people controlling the AI who need to do their job of watching it. It's just like, “Oh, I was following orders.” Your job is to give good orders as a human.

AI is not guilty. It's the people controlling the AI who need to do their job of watching it. It's just like, “Oh, I was following orders.” Your job is to give good orders as a human. -Esther Dyson

Somewhat off topic. Do you see AI as in the Terminator movies, Skynet, where as soon as it achieves awareness there's an imminent risk to humans? Or do you see it as that just wouldn't be something it would do?

Actually, let me pitch a book to you by my brother, George Dyson. It's called Analogia. It's about all these questions; it came out three or four years ago, ahead of its time, but well worth reading. Even what he says is not new: the machines don't want to kill us. They're parasites, not conquerors.

What does an AI want more than anything? It wants data, storage, compute capacity, and energy. Who gives it those things? We do. So it wants to keep us healthy. If AI really wanted something, it would not be to kill us all; it would be to use us all. Again, what would really be in their interest? It's to have these humans go put solar stations all around the planet and create a Dyson sphere, which basically captures energy from your sun and helps keep all your AI working.

Effectively, if the AI becomes sentient, we're not a threat to it.

No, but humans are. Again, it's very complicated.

We're much more of a threat to ourselves.

Yes. Yeah. We're amusing ourselves to death, and that's the real problem. We've become intelligent enough to feed ourselves stuff that is bad for us in the long run but feels good in the short run. In a sense, our real problem is a crappy perception of time, so that we want instant gratification. Most people know that they care about their families and want to be healthy long term, and then you look at their behavior. They're so short-term thinking, partly because they were raised to be insecure and they want immediate gratification. In a sense, addiction is a disease of time, where you want everything now.

The second version is “I want everything forever.” The people who are willing to wait are waiting for the wrong things. You have Bryan Johnson torturing himself in order to live longer and torture himself more. Does that make sense?

Yes and no. Yes, I understand why he's doing it, but no, that doesn't make sense to me as to why he would do it.

Yes.

When do you expect to publish the book?

Speaking of impatience, probably not until the spring of 2027. It's due November 1st, but then maybe another place to put AI is in the publishing process. It needs to be read by experts, which is good because I want it to be read by an economist. I'm hoping to get Gary Marcus, certainly some people who have more lived experience with real life.

Various friends of mine do this anonymously. I don't know who they're going to pick in the end. I'm sending them suggestions, but I want people who will make sure it's right. I have a great editor, but then you get five or six independent readers, and then it comes back to the author, who has to fix everything, and then there's all the publicity. Everything you heard today, I'll put on Substack in some form or other, and there's a fair amount more.

Awesome. If people want to read about what you're currently doing, where can they find it?

Not much. First I need to get this thing written; then I can start putting excerpts out and getting useful feedback. Right now I'm pretty much heads down. It's not fit for print; it's not even really fit for online. Probably starting November 1st, I'll start going online. There are podcasts and snarky comments on LinkedIn and so forth.

Are you at least enjoying the process of writing?

I'm loving it because it's not, “Oh, I know this stuff; let me just tell people.” Speaking of quantification, we quantify public welfare by the GDP. I first thought, “Well, a large portion of healthcare is actually repair that shouldn't have been necessary. Maybe we should take that out of the GDP.” Then I started looking into what the GDP is. It's so wonky.

One thing I say is that GDP is like your weight. It's certainly useful to know whether it goes up or down, but then you want to know why. What a doctor wants to know is, “Well, what about your bone density? What about your muscle mass?” It's the components that are interesting. Is your muscle mass growing or shrinking? That was one rabbit hole.

There are so many others where, in order to be useful, I need to make sure I've got my facts right; then I learn stuff, and then I explore more. There's pretty much one chart, though it's not for the book. I've done a lot of investing, and this chart basically has earning on the Y-axis and learning on the X-axis. Usually, the more you earn, it's like, “Oh, it's a success, but it wasn't yours. You got lucky.” I got lucky with Square. I got lucky with Facebook.

The ones where you learn more are often the ones where something goes bankrupt, and you learn about how management can screw things up. It was difficult, but the investments I make are not made because I believe, or flatter myself, that I know what's going to win. What I do know is: is this something where they will learn and I will learn? My favorite investments are ones where the first time around it didn't work, and then I get to invest in the person's education when they do something right the second time. Learning stuff, to me, is another big motivator.

I love it. As we wrap up the podcast, we've talked about quite a broad range of subjects. Any parting advice for the listeners from your lifetime of experience?

Ask good questions. That's it. Listen to the answers and then ask why again. Don't lose that curiosity. I cracked my femur a while ago, so I was on crutches and things. I ended up talking to someone, and her point was basically, “If you smile at people, they will smile back.” Especially if you're on crutches, people don't want to look at you, because nice people don't want to stare.

But if you smile at people, the smile becomes a smile of recognition, and chances are they'll smile back. Not always, but smile a bit more and you'd be amazed what happens.

I had a wonderful conversation with a cashier the other day because she had a Russian accent. I asked her where she was from. She was from Uzbekistan. I've been to Uzbekistan. In some weird way, that made my day, even though it was a very useful day in other ways.

That ties back into the human connection.

Yes.

Awesome. Thank you so much for coming on the podcast, Esther. I really appreciate your time today.

Thank you. This made my day. Thank you. Take care.

About Your Host

Chris Parker

Chris Parker is the founder of WhatIsMyIPAddress.com, a tech-friendly website attracting a remarkable 13,000,000 visitors a month. In 2000, Chris created WhatIsMyIPAddress.com as a solution to finding his employer’s office IP address. Today, WhatIsMyIPAddress.com is among the top 3,000 websites in the U.S. 

