The idea of privacy has evolved. From the early days of the internet to today’s complex global data ecosystems, privacy concerns have changed, and the rise of AI and biometric data has increased those concerns exponentially. Is it even possible to have privacy anymore? Let’s take a look at the current state of digital privacy and where it might go in the future.
See Privacy is Dead with Pam Dixon for a complete transcript of the Easy Prey podcast episode.
Pam Dixon is the founder and executive director of World Privacy Forum, a nonprofit public interest research group. She has been working on privacy since the early 1990s, making her one of the earliest people in the field. As soon as she got on the internet, she realized it would have unique privacy implications. The idea of what the world would look like once everything was fully digital became a passion. In 1993, she co-authored Be Your Own Headhunter Online, one of the first books to talk about internet privacy.
While researching workplace privacy, Pam found significant privacy and security issues in popular online resume databases and wrote a 90-page report about them. Richard M. Smith, lead director at the University of Denver School of Law’s Privacy Foundation, was impressed by the report and hired Pam as a principal investigator. There, she learned the technical aspects of privacy. After 9/11, she decided the best thing to do would be to get into broad research, so she founded the World Privacy Forum to research privacy and data governance. The group studies the privacy implications of complex data ecosystems, data brokers, identity, healthcare systems, financial systems, global privacy, AI, genetic databases, machine learning, and more.
The Evolution of Privacy Concerns
The first modern age of privacy began in the 1970s. Hesse, Germany, passed the very first privacy law in the world, though it was less than a page long. The United States passed the Fair Credit Reporting Act, the first major national privacy legislation. The Organisation for Economic Co-operation and Development (OECD) in Paris took notice and used it as a foundation to develop the OECD Privacy Principles, which then percolated into laws across the world. The European Data Protection Directive, passed in the 1990s, became the first multinational privacy law.
Around 2012, Europe realized that privacy laws created before the majority of people were online didn’t match the privacy concerns of where the internet had gone. They started the conversation around what is now known as the GDPR. The GDPR has an extraterritorial provision, which means that even though it’s a European law, it applies to anyone who wants to do business in Europe. This incentivized a lot of other countries to pass similar legislation. Now, the only holdouts are small island nations, countries without the economic infrastructure to do so, and the United States.
The idea of privacy has to change along with technology. Just look at how much things have changed since the beginning of digitization in the late 1980s. The internet era has completed its purpose, and now we’re moving into machine learning and AI. Everyone is talking about large language models, and many are confused about how AI impacts everything. We’re seeing extraordinary systemic impacts and changes, even in places we wouldn’t expect. This is a unique transformation we’re lucky to be seeing firsthand.
We are right now in a transition that happens once every thousand years. We’re very lucky people. – Pam Dixon
Consumer Perspectives on Digital Privacy
Initially, consumers were unequivocally excited about internet technology. When Google launched Gmail, though, Pam realized the company was allowing its ad engine to scrape people’s emails for keywords, and that wasn’t okay. She co-wrote a passionate letter about how bad it was, and got death threats for years over it. But she was right, and eventually Google stopped the practice because it violated European privacy laws.
This shows how consumers have become more aware of privacy concerns and more ambivalent about the technology. We saw the same thing with social media. People posted everything enthusiastically on Facebook, then migrated to other social media. Now we’re seeing kids suing their parents over sharenting and more concern about what tech is doing to us. Public perception has gone from “This is a great innovation!” to “It has some harms, and we need ways to be safe from those.”
It goes from, yeah, this is a great innovation, to … we need some guardrails. – Pam Dixon
Other parts of the world have different mindsets, too. Asia has less rigid ideas about privacy – they put up more guardrails, but those guardrails are more flexible. India has the Aadhaar system, which provides fully digital, real-time biometric identification. Even the smallest, poorest villages are digitized. India also has a lot of digital, technical, and legal protections for privacy. But because of how quickly technology is changing, there are still gaps.
Adapting to AI
At this year’s OECD meeting, the University of Melbourne presented a study they had done: a trust index for AI systems. The developing world had the highest measured trust in these systems. This has led to an interesting situation where developing nations are the earliest adopters of new AI tech, while developed nations are more hesitant. There’s real potential for these nations to “leapfrog” over others in the next decade.
These systems are very complex. In the US, the themes in privacy around these systems are control and consent: control the flow of data and give people an opportunity to consent. But the quantity of data is so great that it’s hard to give people genuine insight into every piece of it. Data free flow with trust (DFFT) might be a solution. It allows data to flow more easily between trusted and verified entities. Privacy advocates tend to want more guardrails than DFFT has, while the innovation side appreciates that it creates fewer roadblocks. But everything is so complex that it’s impossible to come up with a single solution that fits everything.
We are in such a complex situation that it’s almost impossible to characterize everything under one rubric anymore. – Pam Dixon
The Future of Privacy
The privacy principles we know today have flowered, and now they’re reaching the end of their usefulness. New principles are waiting, but we don’t yet know what they are. There are people who will tell you they know. Pam thinks she sees hints of them. But we need to watch the ecosystems as they evolve before we make assumptions. Cars have been around for a while, and we can make them better because improving them means tweaking a known system. Managing privacy concerns is harder because we don’t know where we’re going or what systems will look like in the future. Some aspects will get better and some will get worse – it’s not an either/or situation, and it’s not linear.
We do have hints of where some problems are, especially with machine learning, neural networks, and AI. Geographic Information Systems (GIS) are used a lot around the world, primarily for good purposes like providing healthcare to nomadic tribes. But a system that identifies and tracks different ethnicities can also be used for evil. Algorithms have been criticized for bias – racial, age, gender, and more. If everything is tracked, even if the purpose is to help people, it could both help and hurt.
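To make the bias concern concrete, here is a minimal sketch of one common fairness check: comparing selection rates across groups using the “four-fifths rule” heuristic. The group names and numbers are entirely hypothetical, and this is only an illustration of how such a disparity might be measured, not a specific audit method discussed in the episode.

```python
# Hypothetical outcomes of an automated decision, broken down by group.
# All group names and counts are made up for illustration.
decisions = {
    "group_a": {"approved": 80, "denied": 20},
    "group_b": {"approved": 40, "denied": 60},
}

# Selection rate: the fraction of each group that got the favorable outcome.
rates = {
    group: counts["approved"] / (counts["approved"] + counts["denied"])
    for group, counts in decisions.items()
}

# Four-fifths rule heuristic: flag a potential disparity when a group's
# selection rate falls below 80% of the highest group's rate.
highest = max(rates.values())
for group, rate in rates.items():
    flag = "potential disparity" if rate < 0.8 * highest else "ok"
    print(f"{group}: selection rate {rate:.2f} ({flag})")
```

Running this prints a selection rate of 0.80 for group_a and 0.40 for group_b, flagging the second group – the same kind of gap that audits of real systems look for.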
Privacy Concerns in Tracking and Machine Learning
A policy study found that if you’re a member of an indigenous tribe or another identifiable group, you could be genetically identifiable. If you’ve provided genetic information to a biobank and that data is analyzed with AI or machine learning, there’s a good chance you could be identified even if the data has been anonymized. If you’re a member of a Native American tribe, have a rare disease, or belong to any other group without a lot of people in its data set, you could still be identifiable. Current privacy mechanisms don’t cover this concern.
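A toy sketch can show why small groups defeat simple anonymization. Here, names are stripped but quasi-identifiers remain; any record whose combination of attributes is unique is effectively re-identifiable. The data set and field names below are invented for illustration.

```python
from collections import Counter

# Hypothetical "anonymized" biobank records: names removed, but
# quasi-identifiers kept for research use. All values are made up.
records = [
    {"tribe": "A", "diagnosis": "common_condition", "birth_year": 1980},
    {"tribe": "A", "diagnosis": "common_condition", "birth_year": 1980},
    {"tribe": "A", "diagnosis": "common_condition", "birth_year": 1980},
    {"tribe": "B", "diagnosis": "rare_disease", "birth_year": 1975},
]

# Count how many records share each combination of quasi-identifiers.
group_sizes = Counter(
    (r["tribe"], r["diagnosis"], r["birth_year"]) for r in records
)

# k-anonymity intuition: a record hidden among k identical records is safer;
# a record in a group of size 1 can be linked back to a person.
for combo, size in group_sizes.items():
    status = "re-identifiable" if size == 1 else f"hidden among {size}"
    print(combo, "->", status)
```

The lone record with the rare diagnosis prints as re-identifiable – exactly the situation the study describes for members of small tribes or rare-disease cohorts, and one that machine learning makes easier to exploit at scale.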
Every individual has their own level of concern about this. Some people with a rare disease are concerned about the privacy of their health data and don’t want it out there. Health privacy is an important area of research. Other people are willing to risk potentially being exposed to help other people who also have that disease. There’s vulnerability there, and we have to understand the complexity of privacy. It gets even more complex when AI is involved.
Privacy, when it comes down to it, is highly contextual, and this is what makes it so difficult to actually implement and legislate. – Pam Dixon
Right now, we’re in a transitional era. It’s essential to understand what we’re doing. We need to move slowly and carefully, but we do need to move. AI requires guardrails, even if we’re not sure what those look like right now. We need to develop principles and apply them. Right now, a lot of this technology has no governance systems in place. But those tools are important if we want to address privacy concerns.
The Importance of Testing Solutions
Any kind of tracking needs to have guardrails. Pam isn’t sure what those should look like, and we need a lot of factual work to figure that out. You’d be surprised how many legislative solutions to privacy concerns have never been tested. We’re past the era where that could work. Proposed solutions must be tested to see if they actually work.
There are all these proposals for fixing privacy that literally have never undergone a single test. – Pam Dixon
Lots of legislators are worried about hurting innovation. But anyone who has looked at how existing privacy legislation actually applies in the world knows it’s almost inapplicable to any meaningful AI. People want to take advantage of this technology, but they have a vague sense that something could go wrong, and it’s hard to see the specific privacy concerns. Guardrails will actually help people feel more comfortable engaging with these technologies.
Privacy Across the World
In the 1990s, Rwanda suffered a terrible genocide facilitated by its ID cards, which listed ethnicity. Pam had the privilege of teaching at Carnegie Mellon University Africa in Rwanda in 2023. She taught about identity ecosystems in the developing world and invited Josephine Acacia of the Rwandan National Identity Authority to teach part of a class. Acacia spent an hour and a half talking about what the agency had specifically done to make sure the ID system could never be abused like that again.
The Aadhaar system in India is the largest ID system in the world, covering over a billion people. When it was first built, it was a privacy nightmare. The issues were adjudicated by the Supreme Court of India, whose decision struck down the law that allowed some of the problems, mandated more privacy legislation, federated the centralized database, and added significant privacy controls. Today, the system is quite resilient.
No system is perfect. It takes good governance and good guardrails. But we can build great systems that do a lot and still address privacy concerns. A lot of countries are already doing it. Asia is doing really well, and Europe is also doing pretty well. We just have to figure it out.
Privacy in the United States
The United States has the most computing power in the world. We have some of the largest models and strong power centers for AI and machine learning. We also use a lot of quantum capacity compared to the rest of the world. We’re very dominant in that way. Other regions of the world are looking at policy, at how technology and privacy concerns are impacting people, and at new permutations of technology, people, regions, and countries. Civil society, academia, and governments are all involved in this work.
The US is ahead in computing, but behind in understanding and implementing guardrails. There’s a lot of resistance to anything that looks like it might slow innovation. But it’s a balancing act. Studies looking at trust in digital ecosystems show that lack of trust means tech isn’t adopted as quickly. If legislation is too tight, it becomes an innovation problem. But if it’s too loose, it becomes a trust problem. We need to find the sweet spot of fostering innovation while protecting privacy.
Learn more about Pam Dixon and the World Privacy Forum at worldprivacyforum.org. There you can find op-eds, reports, and thousands of articles on various privacy topics.

