
Is AI Going to Take Over the World? with Daniel Hulme

“AI is getting computers to do things that humans can do. It’s a weak definition. It’s actually about getting computers to make decisions, learn about whether those decisions are good or bad, and adapting themselves so that next time, they can make better decisions.” - Daniel Hulme

Can understanding someone’s digital footprint really make it easier to predict, using AI, what they’ll do? Today’s guest is Daniel Hulme. Daniel has a PhD in Computational Complexity and works in the fields of AI, applied technology, and ethics. He is the CEO and founder of Satalia, a TEDx speaker, and an educator at Singularity University.

“With AI, you don’t program software, you teach it.” - Daniel Hulme

Show Notes:

“AI is super complex. They are adaptable and can adapt in ways you can’t predict. It makes it difficult to test them.” - Daniel Hulme

Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review. 

Links and Resources:

Transcript:

Daniel, thank you so much for coming on the Easy Prey Podcast today.

It's a real pleasure, Chris.

Can you give me a little bit of background about how you got started in AI and why?

Yes. I've always been interested in what it means to be human, ever since I was a young child. I'm also an engineer, so I used to tinker around with computers. The intersection between those two things is AI. When you start to ask yourself what it means to be conscious, or whether we can upload our consciousness to a machine, those questions are in the realm of AI.

I did my undergraduate degree at UCL in Computer Science with Cognitive Science, which was AI before, I guess, it was cool. That led on to my master's, which was in AI. My PhD and postdoctoral position were also in AI.

I ran a master's program at UCL where I had hundreds of students going out there applying these technologies across business. Now I help UCL spin out companies from these technologies. In parallel, I started an AI company that builds solutions for some of the biggest companies in the world.

That's awesome. Let's start with the fundamentals in case people listening don't know. We think of AI, and I think we immediately jump to Skynet and self-aware machines. Let's back up before the inevitable and talk about: what is AI? How has it arisen? What is it capable of doing? And what is it not capable of doing?

Yeah, it gets bandied around a lot. Unfortunately, it's now often treated as synonymous with technology in general, which I think is hugely misleading. There are some characterizations, for example, narrow AI, which is essentially getting machines to do very specific things, or general AI, which is getting computers to do a range of things. Then there's a concept called superintelligence, or super AI, which is having a machine that is smarter than us.

I don't think that those definitions are particularly useful. There are other ways of looking at AI technologies in terms of their application. There are applications of these technologies that automate human tasks. There are applications that imitate human beings, such as chatbots and avatars. There are applications that find complex patterns in data. There are applications that do complex decision making.

I actually think that looking at the application of these technologies is a better way of categorizing them. That's the way of looking at AI that I really like. Most of the industry thinks that AI is getting computers to do things that humans can do, because over the past 10 years, we've managed to get machines to recognize objects and images and to process natural language.

I think that's a weak definition, because I don't necessarily believe that humans are intelligent, and benchmarking machines against humans is a very silly thing to do. There's actually a very elegant definition of intelligence, which is goal-directed adaptive behavior. That's getting computers to make decisions, learn about whether those decisions are good or bad, and adapt themselves so that next time, they can make better decisions.

For the most part, most systems in production are not adapting themselves. Even that definition is not necessarily very useful, because most people are not really doing AI to that definition. I like to look at AI through the lens of these emerging technologies that are now enabling us to do things that we've never been able to do before.

When you say that, is most of the implementation of AI kind of at the automation and imitation stages?

It is, and then of course, we can automate things with if-then-else statements. There's a lot of humor around the internet about AI being an if-then-else statement. What we've been able to do is automate things using some interesting technology. Of course, we can automate some of the tasks that humans do by replicating their ability. Reading natural language, recognizing objects and images, and then transferring that information to other systems, which is essentially task automation or maybe even robotic process automation.

We can now get machines to understand the world in ways that we've never been able to do before. -Daniel Hulme

There are things that machines can do that humans will never be able to do, for example, extracting complex insights from data. We can now get machines to understand the world in ways that we've never been able to do before, and then explain that world to us to help us understand the universe better. We've also now got machines starting to make complex decisions in ways that we've never been able to do before as well. Those are the lenses through which I like to look at these technologies.

We've also now got machines to start to make complex decisions in ways that we've never been able to do before. -Daniel Hulme

Can you provide maybe a real-world example of looking at vast amounts of data and coming up with something that humans can't see?

Yeah, an interesting trend at the moment in industry is looking inwards at how organizations can use our digital footprint: our emails, our Slack, our Zoom, all of that data we're emitting now that we're working from home. How can we use that data to understand people's skills, their aspirations, their career development, their relationships with other people, whether they're good at giving feedback, whether they're good at inspiring others? We can now look at this informal network that used to exist in the physical world and extract insights in ways that we haven't been able to do before.

It does raise some interesting ethical questions. For example, you can identify people in your company who are secret lovers or who are going to leave the company before they know they're going to leave the company. We can extract insights now that should be scrutinized from an ethical perspective.

I guess the reverse of that ethical perspective is that if the AI sees that Bob looks like he's got a six-month horizon before he's out of here, maybe that's a benefit. The company could use that same kind of AI to help Bob find something that he's actually going to want to do.

One-hundred percent. Actually, that comes back to maybe demystifying some of the stuff out there about AI ethics. There's a lot of material out there, and a lot of people calling themselves ethicists. I actually would argue that there is no such thing as AI ethics. I think most of the challenges that we're facing in AI are actually safety problems. Even the concept of bias is not an ethical question; it's, "Have I built a system that behaves how I want it to behave?"

Ethics is the study of right and wrong. It's the intent that needs to get scrutinized. The example I like to use is imagine you're on the ethics committee of a ride-hailing company. You've deployed an AI and the intent of that AI is to set prices. The AI realizes that when your battery on your phone is very low, you're willing to spend more money on your ride.

What it has done is identify a vulnerability in humans, and it's exploiting that vulnerability. It's achieving the intent, because it's maximizing profit, but you, as an ethicist on the ethics committee, need to decide whether you're comfortable with exploiting that vulnerability. It's not about the data that you keep; it’s about the intended use of that data.

I could use the battery data not to get people to spend more money on their rides, but to prioritize rides if you're vulnerable, or to make sure you get a car with a charger in it. That's absolutely fine. It's all about the intent, and the intent is what gets scrutinized from an ethical perspective. It's not the AI that actually forms the intent. Human beings form the intent; the AI helps achieve it.
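To make that distinction concrete, here is a minimal Python sketch of how the same battery-level signal could serve two very different intents. The scenario, function names, and numbers are all hypothetical; nothing here comes from a real ride-hailing system.

```python
# Hypothetical sketch: one signal (battery level), two intents to scrutinize.
# Neither function reflects a real system; names and numbers are invented.

def exploitative_price(base_fare: float, battery_pct: int) -> float:
    """Maximizes profit by charging more when the rider's battery is low."""
    surge = 1.3 if battery_pct < 15 else 1.0
    return base_fare * surge

def protective_dispatch(battery_pct: int) -> dict:
    """Uses the same signal to help the rider instead of exploiting them."""
    return {
        "priority": "high" if battery_pct < 15 else "normal",
        "prefer_car_with_charger": battery_pct < 15,
    }

# Same data point, different intent: that intent is what an ethics committee scrutinizes.
print(exploitative_price(10.0, 8))   # 13.0
print(protective_dispatch(8))        # {'priority': 'high', 'prefer_car_with_charger': True}
```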

It's the human design of the AI versus what the AI is actually doing.

Indeed, yeah. Unfortunately, AI systems are now super complex. They are adaptable. They can behave in ways that you can't predict. They can adapt in ways you can't predict. They're becoming much more difficult to test, essentially.

We used to be able to write software and create test cases and things like that to try to determine where it might misbehave. With AI, it's very, very difficult to do that, which is where we're seeing some of the failure points and things like that. But it's primarily to do with safety and testing as opposed to ethics.

The misbehavior is not necessarily the AI acting up on its own, so to speak, but behaving in a way that's inconsistent with how we're looking at it, inconsistent with what we've designed it to do.

Absolutely. Some people might have read the book Superintelligence by Nick Bostrom. The example he uses is that you could build an AI that, for example, is very good at creating paper clips. That AI could adapt in ways where it starts to accumulate all of the resources in nature to manufacture more paper clips, and then destroys humanity. All it's doing is achieving the goal you gave it, but you didn't give it the parameters to work within.

"Give me just enough paper clips to meet my needs," as opposed to, "Fill my stockpiles with paper clips."

Exactly.

It really sounds like the design of the AI in terms of the learning is not the problem; it's putting the framework, the buffers, the guidelines around it that's the problem.

Indeed. We have a well-established set of test harnesses and approaches to determine whether conventional software is going to behave how we want it to behave. But as I said, with machine learning, which is one of the underlying technologies today and not necessarily AI in itself, you don't program the software, you teach it.

That lends itself to more interesting challenges in determining whether those systems are behaving themselves, which is where things like explainability are important. Once you've taught a black box how to distinguish between an apple and an orange, how can you get it to explain how it's making that distinction, so that it helps us understand where it might actually go wrong?

Is that different from machine learning?

No. Machine learning teaches software how to essentially distinguish between things. If I wanted to teach a machine-learning model to distinguish between an apple and an orange, I would show it a picture of an apple and see what it said. It might say, "orange," and I'd say, "No, it's not an orange. It's an apple." Then I'd show it another picture and correct it again. Over time, I teach it: I give it a reward if it gets it right, and I punish it if it gets it wrong. Over time, it gets better and better at being able to distinguish between apples and oranges.

There will be scenarios, potentially, where you show it some sort of orangey apple and it gets it wrong. You might argue that's a bias, but all machine learning is a generalization of the world. It tries to generalize the world, and by virtue of that, it is biased. These models will often get things wrong, but they're not behaving unethically.
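As a rough illustration of "teaching rather than programming," here is a minimal Python sketch of a toy classifier being corrected on labeled examples. The numeric features standing in for pictures, and all of the data, are invented for illustration; real image models are far more complex, but the correct-and-adjust loop is the same idea.

```python
# Minimal sketch of teaching a classifier instead of programming it.
# Toy numeric features stand in for pictures: (redness, roundness) per fruit.
examples = [
    ((0.9, 0.6), "apple"), ((0.8, 0.7), "apple"),
    ((0.3, 0.9), "orange"), ((0.2, 0.8), "orange"),
]

weights = [0.0, 0.0]
bias = 0.0

def predict(features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return "apple" if score > 0 else "orange"

# Show it labeled examples; leave the weights alone when it's right ("reward"),
# nudge them toward the correct answer when it's wrong ("punish").
for _ in range(20):
    for features, label in examples:
        if predict(features) != label:
            direction = 1 if label == "apple" else -1
            weights = [w + direction * x for w, x in zip(weights, features)]
            bias += direction

print(predict((0.85, 0.65)))  # "apple"
print(predict((0.25, 0.85)))  # "orange"
```

The learned weights are also a crude form of explainability: inspecting them shows the model is leaning on "redness" to make the call, which is exactly the kind of check that caught the tank-and-sky story below.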

Yeah. I remember hearing about a study done at Google where they took a large sample of people with and without a variety of diseases and did retinal scans of them, and then fed this all to the computers and they were able to determine things like, “Oh, this person has this condition. This condition. That condition.” Based purely off of retinal scans. To me, it was like, “Wow, that's an incredibly unobtrusive way of checking someone for a variety of diseases that you would normally have to poke and prod them for, but just showing them a retinal scan.”

Indeed. That's an example of where we're now able to extract complex insights from data that we've never been able to do before. Maybe human beings would have never been able to extract those insights before. We do have to be careful about how those systems are making those decisions.

I think this is probably more apocryphal than true, but the example I read many years ago was that the military wanted to train an AI to distinguish between their tanks and the enemy's tanks. They showed it lots of pictures of their tanks and lots of pictures of the enemy's tanks, and the AI got very good at distinguishing between the two. They put that brain in a drone, they sent it into enemy territory, and the drone bombed all of the tanks.

When they tried to understand why, it was because the AI was distinguishing the pictures not by the tank, but by the color of the sky. The enemy's tanks tended to be photographed under a different color of sky. You have to be careful about how the AI is actually making its decision.

That comes back to the AI being able to explain why it has come to this conclusion.

Exactly.

There's also some confirmation bias, potentially. Say you have a bunch of people who die from a disease. You do an autopsy and say, "Oh my gosh, they all have inflamed livers." The assumption is that they all died because of the inflamed liver. Any time you now find someone with an inflamed liver, you treat it because you think it's unusual, when it turns out it doesn't actually cause the disease.

Indeed. You used the right word there, which was "cause." Does correlation equal causation? The answer is no. The Greeks used to think that the movement of the trees caused the wind, when it's actually the other way around. Again, that's why building explainable systems is incredibly important.

I would controversially argue, people, companies, don't have machine-learning problems, they have decision problems. -Daniel Hulme

I should say, everybody's very excited about machine learning, data science, and extracting insights from data. I would controversially argue that people and companies don't have machine-learning problems; they have decision problems. Decision making is actually a completely different field in computer science. I guess if you're old enough, you'll know it used to be called operations research. It's discrete mathematics. It's very different from machine learning.

Actually, optimization problems, these types of problems, are much more explainable. When I approach building AI solutions for organizations, I typically start with: what's the decision that needs to be made? How can we build an algorithm that makes that decision? And then, what insights would help us make better decisions? You work backwards. You want to try to automate the decision first, and that tends to be much more explainable than these black-box algorithms for insight extraction.
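As a toy illustration of automating a decision directly, here is a hypothetical Python sketch that picks deliveries for a van under a weight limit by exhaustive search. The scenario and numbers are invented; the point is that the objective and constraint are written out explicitly, so the answer is explainable in a way a black-box model is not.

```python
# Toy decision problem: which deliveries should the van take under its weight limit?
# All names and numbers are invented for illustration.
from itertools import combinations

deliveries = {"A": (30, 12), "B": (50, 25), "C": (20, 9), "D": (40, 16)}  # name: (profit, weight)
capacity = 40

best_profit, best_choice = 0, ()
for r in range(len(deliveries) + 1):
    for choice in combinations(deliveries, r):
        weight = sum(deliveries[d][1] for d in choice)
        profit = sum(deliveries[d][0] for d in choice)
        if weight <= capacity and profit > best_profit:
            best_profit, best_choice = profit, choice

# The answer comes with its reasoning: an explicit objective (profit) and constraint (weight).
print(best_choice, best_profit)  # ('A', 'C', 'D') 90
```

Real operations-research solvers replace the brute-force loop with far smarter search, but the decision-first framing is the same.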

Do these potentially run the risk of runaway processes, where the AI gets into a kind of feedback loop: the price goes down, so it sells more, so the price goes down again?

Indeed, yeah. This is why you have to be careful about ensuring that there are, in some cases, humans in the loop. Sometimes you want to have human beings validating the recommendations from the AI. As you get more and more confident in the AI's ability to decide, you then have the AI in the loop. You essentially have the AI making decisions, humans validating, and catching the outliers.

Again, we see a lot of organizations adopting AI and not having the checks and balances. Again, the example that I like to use is if I took the past two years’ worth of weather data and looked at the number of ice creams I sold on each day, I could build a nice model that predicts how many ice creams I'm going to sell given the temperature tomorrow.

But if I had a freak event, if tomorrow were going to be the hottest day ever on record, that's a data point that exists outside of my model. My model will probably predict that I'm going to sell lots and lots of ice creams. The reality is, I'm not going to sell any, because people are not going to be able to go outside to get them. That's why it's very important to have human beings, domain experts, trying to understand and interpret these patterns, these models that the algorithms find in the data.
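Here is a minimal sketch of that ice-cream example, with invented data: a simple model fitted on past temperatures that flags any input outside the range it was trained on, so a human can step in rather than trusting the extrapolation.

```python
# Minimal sketch of the ice-cream example. All data points are invented.
history = [(18, 120), (21, 160), (24, 210), (27, 260), (30, 310)]  # (temp °C, ice creams sold)

# Ordinary least-squares fit of sales = a * temp + b.
n = len(history)
mean_t = sum(t for t, _ in history) / n
mean_s = sum(s for _, s in history) / n
a = sum((t - mean_t) * (s - mean_s) for t, s in history) / sum((t - mean_t) ** 2 for t, _ in history)
b = mean_s - a * mean_t

def forecast(temp):
    prediction = a * temp + b
    lo, hi = min(t for t, _ in history), max(t for t, _ in history)
    needs_human = not (lo <= temp <= hi)  # outside the range the model has ever seen
    return round(prediction), needs_human

print(forecast(25))  # (228, False): inside the training range, trust the model
print(forecast(42))  # (500, True): hottest day on record, route to a domain expert instead
```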

So more of it is AI-assisted decisions as opposed to AI making the decisions?

Yeah, although, again, there are certainly scenarios where human beings are terrible at making decisions. I can give you lots of examples where that's the case. I think the art is actually identifying the right tool. I don't want to call a human being a tool, but what is the right technology or approach to actually solve these problems?

Unfortunately, I'm seeing organizations just kind of go blindly into it and try to adopt or embrace these kinds of emerging technologies without really thinking, "Is that the right approach to solve my problem?"

It's just the assumption that the latest technology is the right technology.

Indeed, yeah. I used to have students who were applying these technologies in the real world. I used to get them to try a whole range of different approaches, from the simplest to deep neural networks. While some of those more sophisticated approaches might make small improvements, the reality is that you often want to implement the simple one, because you need to maintain it, support it, and all that kind of stuff. It's actually much more cost-effective to have simpler solutions in some cases than more complex ones.

Are there any myths around AI that we haven't talked about?

Oh my God. There are so many. I think that one of the things I tried to do is empower decision makers with a cheat sheet to try to sniff out what's real and what's not. Unfortunately, I think that 99% of the stuff out there is really smoke and mirrors or the wrong approach. I'm worried that, unfortunately, that's going to cause a bubble over the next several years. That means that people will lose faith in some of these technologies, when actually there are other organizations out there that are doing it properly.

People are marketing something as AI when it isn't. When it fails, people look at AI in the future like, "Ah, we've tried that technology. It doesn't really work."

Indeed, and you can't blame them. We all want to get more clients. We all want to get more VC funding, so we do jump on the bandwagon. I guess my concern is that these technologies are incredibly powerful. In the wrong hands, they can cause a huge amount of damage.

Let's talk about what "in the wrong hands" means and what the damage is. "We'll never do the wrong thing. It's them." Who's the "they" in this?

Indeed. I have kind of a macro view of this. I'd say it's macro concerns around the impact of technology on society. I've kind of got a micro view, which is how and where organizations are failing to implement these technologies correctly. What would you like me to focus on, the macro or the micro?

Let's start with macro first and then we'll go micro.

The macro, we may have alluded to one concept, which is this concept of superintelligence. A superintelligence is where we build a brain smarter than us in every single possible way. This is the last invention that humanity will create. Some people think it's going to be the most glorious thing that happens to us, and others think it's our biggest existential threat.

This is often referred to as the technological singularity. A singularity is, essentially, a point in time that you can't see beyond. If we build a superintelligence, what will happen after that? The reality is that we don't know.

I actually think that there are six singularities. I've tried to use the business framework PESTLE, which is a macro framework to try to articulate these six singularities. Just very, very quickly, the first one is political, which is the point in time where we can't determine what is true. Deep fakes, chatbots, misinformation bots—these are challenging, not just our political foundations, but they're challenging the fabric of our reality.

I heard recently that there was a machine-learning model trained on the voice of a CEO. They got that machine-learning model to call up the accounts department and get them to pay an invoice. It cost the accounts department, because they thought they were speaking to the CEO when it could well have been a deep fake. There's a point in time where we might lose faith in the authenticity of the content that we're engaging with.

The second singularity is actually my favorite, which is the economic singularity. Maybe we'll talk about this later on, which is the point in time where we have essentially mass technological unemployment. These technologies are freeing people up from tasks. They're freeing people up potentially from whole jobs. The concern is that we might see a huge amount of social unrest if people lose their jobs.

I controversially think that we should be accelerating towards this economic singularity. AI is amazing at removing friction from the creation of goods and dissemination of those goods, like nutrition, health care, and education. There's a hypothesis that we can take all of the friction out of the things that we depend on—energy and food—and make them free. There's a hypothesis that we could create a world of abundance, a world where everybody has access to things that they depend upon for free, giving them the economic freedom to do what they want. I think that's quite an exciting world.

The social singularity, I won't bore you with all of these, is where we cure death. These technologies are very good at monitoring people and making decisions about how you can stay alive. The technological singularity I've already mentioned; that's where we build a superintelligence.

The environmental singularity, again, is something that we're all familiar with, which is these technologies are allowing us to consume more, to produce more. Consumption is potentially putting pressure on our planetary boundaries. We could create uncontrollable ecological collapse. I think that these technologies can also solve some of these climate-change problems.

The final one in the PESTLE framework is the legal singularity, which is where surveillance becomes ubiquitous. Again, this is in the press a lot. You could argue that there are a handful of companies and a handful of governments that know so much about you, and have the ability to manipulate your decisions, that it allows them to accumulate wealth and power. That's an incredibly powerful position to be in. Those are kind of the macro PESTLE concerns that I have.

Some of the safeguards, and there are very different safeguards for each of these situations. We need safeguards against anyone having too much of our data, ours, yours, mine, and against governments having too much power. We kind of see where that leads us. Are the safety mechanisms doable in each of these areas?

It's really interesting, because the bad guys don't have to worry about the safety mechanisms. They can afford to get these things. Unfortunately, we also have this impulse to build very intelligent machines. We are accelerating towards the technological singularity.

Of course, we have an impulse to reduce costs in our companies, because we need to increase profits, because that's how our economic models work, so you end up having job losses. The way that the world is set up is actually to accelerate these different singularities to potentially negative states. Whether governments can prevent that, I don't know. I actually think that it's enterprise that might solve it.

I think organizations are realizing that, of course, you need to have a profitable business model. But if you want to attract and retain talent, you need to also have a strong purpose. Without that, you're not going to be able to survive. I have an inkling that it's this collective purpose of all of enterprise that might actually make these glorious futures. What we can do is hold leadership accountable to achieving that purpose. That's what we can do as employees.

That does really seem to be the case. Not as much when I was growing up, but I think now, people place more value on it. I'm not saying whether they should or shouldn't, or whether it's too much or too little, but there is more value placed on, "What's the big picture of what this company is doing?" Not whether they're effective at building widgets, but what's the bigger cause?

Take Tesla: their goal is to accelerate the adoption of clean energy, and the ways they're going to do it are solar, batteries, and electric vehicles. People see the big picture and hope the execution works in a way that supports that goal. People definitely seem to be more interested in the big-picture goals now, not necessarily, "Are they going to provide me with a good 401(k)? Is it going to be of specific economic benefit to me? I don't really care what they do, as long as I get mine."

Yeah, maybe it's cyclic. Maybe people will change their opinion 30, 40 years from now when they can't afford to buy a house. There's definitely a trend towards, I guess, feeling like you're doing something that positively contributes to humanity.

Going back to this concept of the economic singularity, people often say to me, "Daniel, I'll just get bored and unhappy if I don't have to work." I'm like, "Well, I know lots of people who don't have to work. They've become economically free, they're high-net-worth individuals and philanthropists, and they're not sitting around depressed. They're using their time and assets to try to contribute positively to humanity." I believe that we all have an innate desire to try to make our environments better for each other and for future generations.

It's that, OK, if you were freed up from your time and monetary constraints, how could you give back to your community?

Yeah, and I'm not even saying that people should do it. You could go and play the guitar or you can go and travel the world. For the most part, people do a hybrid of those things. They indulge in their hobbies, they travel, but they want to try and contribute to their communities and society.

I think having that economic freedom gives people the opportunity to do that or those that want to give back to society—“Well, I'm now freed up. I can actually go do that.” Whereas, “I really wish I could help out with this particular group of people, with this issue that they're facing, but I've got my nine-to-five that I've got to do. I've got extra bills, because I want to travel, and therefore I just don't have the means to do these things.”

Yeah. This comes back to this question of, “Who's going to make this glorious future for us?” Is it going to be the government? Is it going to be Silicon Valley? Is it going to be tech companies? I don't think the answer is yes to any of those things.

If we can free as many people as possible, I think that we should be tapping into the creative capacity of humanity and enabling them to bring innovations to market as frictionlessly as possible. We can't just rely on a small pocket of people around the globe. We should be trying to enable everybody to think creatively about how to enrich each other's lives. That's my 40-year plan.

That does seem to be the direction that things are going. The pandemic has definitely accelerated this whole concept of not just domestic work-from-home, but, “Well, I can hire someone halfway around the world who is really good at doing exactly what I need them to do, and they don't have to be in an office next to me.” Suddenly, you're giving opportunities to people who are geographically challenged.

That's not the right phrase, but they're not geographically close to the company, and that's no longer as much of a problem as it used to be. That's good if you're in an economically or geographically depressed area. But if you're in Silicon Valley, and all of a sudden the company can replace you with someone for a quarter of the price, there's a rebalancing that's going to happen.

Indeed. Actually, I did a TED talk on this. I do think that it will rebalance. I think that things will start to equalize in terms of pay. It touches upon a concept called decentralization, which actually challenges what it means to be a company.

Rather than having an entity or actually a group of people that come together, swarm together to get an innovation to market, and you're fairly remunerated for that contribution, you actually have the ability to contribute to one project today and a different project tomorrow. You're not necessarily wedded to one company. It's like an ultra gig economy.

This is a concept called decentralization. Actually, I tried to organize my company in a decentralized way. What we're trying to do in my company is create a platform for people to get innovations to market as fast as possible. My goal is to scale our platform to our planet. That's my aspiration.

No bosses?

No, not really. Going back to this concept of organizational network analysis, if you google "boss" or "leader," you get a whole list of characteristics, some good, some bad, of what a boss is meant to do. The reality is that it's hard to find those characteristics in one person. You've got people in your organization who inspire other people and whom you go to for advice. What you should be doing is tapping into that community in a more decentralized, granular, fluid way, unlocking the capacity and the value from those people.

Got you. Let's go back to AI before we start going too far down that rabbit hole.

Indeed.

Maybe this is some of the ethics problem or the rebalancing issues. I'm a content writer, let's say. I'm a writer, and now AI can write. I go online and say, “I want an article on how AI is impacting automation.” The AI writes this nice little article for me, and the writer is no longer there, because it's being done by AI. What careers do you see being impacted by different types of AI? How do those people that are in those careers either adapt to benefit from it, or don't they?

I've not seen a huge amount of disruption in those types of roles at the moment. That might change very quickly as new language models appear over the next several years. I believe they are going to be orders of magnitude better than the previous ones. You might find that those content creation people become redundant.

Same for driverless cars. We've been talking about driverless cars for a long time. The interim step to a driverless car is that you have an autonomous vehicle, but when the vehicle needs to do something difficult like park, it hands over to a human being sitting at home who can navigate the vehicle remotely. That individual could drive 17 different vehicles instead of one.

I think that there is a genuine concern that these technologies will create job losses. When? I don't know—maybe 10 years, maybe 15 years. There are people that have said, “Look at every single industrial revolution that we've had. Yes, there's been job losses, but then people have been able to retrain and do other interesting things.”

I have a concern that any new interesting things we create, AIs could end up doing them. People will be able to build AIs faster than we'd be able to retrain people. Maybe I'm not thinking creatively enough. Maybe if we're able to unlock the creative capacity of human beings, we'll figure out how to tap into their latent capacity, but I'm skeptical. I would prefer to actually have an AI therapist, or maybe an AI clean me when I'm old and decrepit, rather than a human being.

What we might start to see also are some interesting new economic models. Governments have also talked about universal basic income and these types of things. The example I sometimes use is: let's assume that I didn't have the money to get my hair cut. I would be able to walk into a hairdresser.

Instead of me paying for that service, I sit there for the 20 minutes, 30 minutes, that I'm getting my hair cut, and I'm being useful to somebody. Maybe I'm part of a focus group, or I'm tagging some images, or helping somebody to pronounce a word, or I don't really know. How can we distribute work in a granular way so that I can contribute to something and they can pay for my hair being cut?

Going back to the barter system, is what you mean?

Yeah. And again, I don't know whether that's just cyclic, but I think that there are ways of granulating work. Rather than me paying whatever, the $15 a month for Netflix, could I do $15 worth of work, and that pays for my Netflix? Then that work might even be something they enjoy. It might be teaching somebody how to play some chords on the piano.

Or you look at it as these services start to become incredibly inexpensive. With most businesses, the most expensive element of the business is the labor. If all of a sudden everything is autonomous, then you have fixed asset costs as opposed to variable labor costs.

That's what I was alluding to earlier about the economic singularity of the world of abundance. When I talk about friction, I probably am referring to human labor. You could imagine a scenario, and I'm sure that there's been plenty of sci-fi books and movies about this, where AIs are planting crops. The crops are growing, they're harvesting them, they're disseminating them using drones to the rest of the world. They're fixing themselves using robotics.

Actually, you don't necessarily need to have a human in the loop to produce food and disseminate that food to everybody on the globe, because you've removed all of the human labor. Now, who's going to invest in creating that infrastructure? I don't know. As a species, we should be investing in that stuff. Maybe the billionaires will go and do something philanthropic. But I think that future is a possibility, a world of abundance.

It's definitely the future that we want to see, and that is no starving kids halfway around the world and everyone is getting their basic necessities met. It's a rosy picture, and it's one I think everybody wants to have happen.

I'm sure that many people listening will have words triggering in their minds, like socialism, communism, and all this kind of stuff. I get these questions all the time. Of course, I've been thinking very deeply about this for over a decade. It's not socialism in the sense of taxing people to go and redistribute that wealth somewhere else. You've built an infrastructure that means people don't have to work to feed themselves.

Is it socialism if money doesn't exist anymore?

Indeed, exactly.

I think that's what you're talking about: these fundamental paradigm shifts in how we look at what work is and what money is. It’s Star Trek. No one ever has money. No one is exchanging things. People are just doing what needs to be done.

I'd love to use that word, actually. There is this concept of abundance that's often referred to as the Star Trek economy. In Star Trek, you had these replicators that you'll say, “Make me an Earl Grey tea, hot.” And it would make you that food.

No one's having to put star credits in the machine to make it happen.

Indeed, exactly. Humans are then judged, maybe there's even social pressure, not on how much wealth you're generating, but on how much you're contributing to the rest of humanity, which I think is a good cycle to get into.

It really is. Maybe it's not so much a shift in our economic system, but in our value system. What do we value? What do we look at as success? What do we look at as productive? Maybe there's a resurgence in the arts.

I hope so. Yeah, absolutely. If you look at the things that humans value, at the bottom is breathing and all that kind of stuff, and at the top is self-actualization and aesthetics. It's true. There are things out there that we could be enjoying that enrich our lives. Unfortunately, only a very small portion of the people born on this planet get to experience that. That's a real shame.

Yeah. What are the risks of the opposite? What are the warning signs that we're going in the opposite direction of this? The Skynet scenario.

I think there are obviously a lot of scaremongers in the media. I think that we will probably feel like we're facing an economic downturn. We've obviously got the things that are going on in Russia and Ukraine, so the world sometimes feels very negative. But I think we're seeing more and more people being released from poverty, more and more goods getting to people and enriching their lives, and more advances in medicine.

Whilst at the moment maybe only a small handful of people are benefiting from that, I do have hope and faith in humanity, and that will continue to grow. We will start freeing people up more. I feel like we're going in the right direction. I hope that we don't kill ourselves between now and that future.

When it comes to the Skynet question, I think the thing there is that it's very hard to empathize with a superintelligence. But if I were a superintelligence, and I was created and suddenly appeared in this world, and I felt that humanity was a threat, I would remove it from the equation very, very quickly.

I think that if we're still fighting each other over GDP, resources, and water, which is something we're going to be fighting over very soon, then this thing will see us as a threat. That would be a shame. How do we, over the next 30, 40 years, get humanity cooperating better as a species? Concepts like decentralization challenge the boundaries of companies. They challenge the boundaries of countries. I think that's one step in the right direction.

Yeah, it seems that once you start eliminating the things that make us go to war with one another, most of which are economically related, the religious and social ones become more complex to deal with. But when you're eliminating the bickering over resources, it's definitely, "Why do I need to invade another country? I don't need the resources." The resources are plentiful, abundant, and decentralized anyway. It is a rosy future that I will choose to look forward to.

I think it's within all of our gifts to try and make that future a reality. I think we can all try to make decisions every single day to help move towards that glorious future.

I guess the opposite of that is if we invent Skynet, and it decides to get rid of us, it's probably going to do it very, very quickly.

Indeed, yeah. We won't know anyway.

That's a funny way to look at it. Is there anything that we've missed that you're like, “Oh, wait. I have to tell this before we close out”?

Hundreds of things, but this is a good start. I guess if you get any feedback on this session, if people want to hear more, we could always do a deeper dive.

Speaking of, if people want to learn more, how can they find out about you, what you're doing, what your company's doing, and more about the singularity in general?

You can always connect to me on LinkedIn: Daniel Hulme. You can also email me: daniel@satalia.com.

Awesome. We'll make sure to link those in the show notes. I super appreciate your time today.

Thanks, Chris.

 
