Privacy is Dead with Pam Dixon

Hosted By Chris Parker

Episode 285
“Guardrails aren’t about slowing innovation. They’re about making sure the systems we build today can’t be used to harm people tomorrow.” - Pam Dixon

Privacy in the digital age has grown from a background concern into one of the defining issues of our time. What began with simple questions about online safety has expanded into a complex, global conversation about how artificial intelligence, biometric data, and massive data ecosystems are reshaping daily life.

Pam Dixon has been at the center of these discussions for more than two decades. As the founder and executive director of the World Privacy Forum, she’s worked across the U.S., Europe, India, Africa, and beyond, advising governments, international organizations, and policymakers on how to create effective privacy protections. 

In this episode, Pam takes us through the history of modern privacy law, the ways different regions approach the challenge, and the new frontiers like collective privacy, AI governance, and health data that demand fresh thinking. She also offers a grounded perspective on how to build systems that safeguard individuals while still allowing innovation to thrive, and why getting those guardrails right now will shape the future of trust in technology.

“If we want digital systems people can trust, they need to be resilient, transparent, and designed with the public interest in mind from the start.” - Pam Dixon

Show Notes:

  • [4:49] Pam identified privacy risks in early resume databases and produced a 50-page report on job boards, now known as job search platforms.
  • [8:56] Pam now chairs the civil society work at OECD in AI, contributing to the Organisation for Economic Co-operation and Development Privacy Guidelines (first adopted in 1980).
  • [11:17] The launch of the internet marked a major shift in privacy, transitioning from slower, isolated systems to globally connected networks.
  • [11:46] Early adoption of the internet was limited to academia, government, and tech enthusiasts before reaching the public.
  • [12:45] Privacy frameworks were built on the Fair Information Practices, developed in the United States in the 1970s by an advisory committee of the Department of Health, Education, and Welfare (HEW), the predecessor of HHS.
  • [15:58] The GDPR (General Data Protection Regulation), adopted in 2016 and enforced from 2018, includes extraterritorial provisions that apply to companies worldwide.
  • [18:59] Large language models and deep machine learning advancements have created new and complex privacy challenges.
  • [22:06] Some countries approach privacy with more flexibility and openness, while maintaining strong guardrails.
  • [23:37] A University of Melbourne study on trust in digital systems, presented at an OECD meeting in June, highlighted evolving global strategies.
  • [26:30] Governments are working together on “data free flow with trust” to address cross-border data concerns.
  • [28:09] Pam warns that AI ecosystems are still forming, and policymakers need to observe carefully before rushing into regulation.
  • [28:31] She emphasizes the emerging issue of collective privacy, which impacts entire groups rather than individuals.
  • [29:04] Privacy issues are complex and not linear; they require ongoing adaptation.
  • [30:24] ChatGPT’s launch did not fundamentally change machine learning, but the 2017 transformer paper did, making AI more efficient.
  • [31:53] Known challenges in AI include algorithmic bias related to age, gender, and skin tone.
  • [33:07] Legislative proposals for privacy now require practical testing rather than theoretical drafting.
  • [35:39] AI legislative debates often center on fears of harming innovation, but scientific data should guide regulation.
  • [40:29] NIH reporting cautions prospective participants in programs like All of Us to fully understand the risks, including group-level identification, before joining.
  • [41:59] Some patients willingly share all their health data to advance medical research, while others are more cautious.
  • [43:50] Tools for privacy protection are developing, but the field remains in transition.
  • [48:56] Asia and Europe are leading in AI and privacy transitions, with strong national initiatives and regulations.
  • [52:42] The U.S. privacy landscape relies on sector-specific laws such as HIPAA (1996) and COPPA (1998) rather than a single national framework.
  • [54:48] Studies show that wealthy nations often have the least trust in their digital ecosystems, despite advanced infrastructure.
  • [56:19] A little-known U.S. provision, A-119, allows for voluntary consensus standards in specialized areas, enabling faster standard-setting than ISO processes.
  • [56:48] Voluntary standards can accelerate development in fields like medical AI, avoiding years-long delays from traditional approval processes.
  • [57:32] An FDA case study on an AI-driven heart pump showed significant performance changes between initial deployment and later use, underscoring the importance of testing and oversight.
“Collective privacy is the new frontier: it’s not just about what happens to you as an individual, but what happens to your community when data is used in certain ways.” - Pam Dixon

Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review. 

Links and Resources:

  • World Privacy Forum: worldprivacyforum.org

Transcript:

Pam, thank you so much for coming on the podcast today.

Oh, well, thank you. It's a great pleasure.

Can you give the audience a little bit of background about who you are and what you do?

I'm Pam Dixon. I'm the founder and executive director of the World Privacy Forum. I am in the US, one of the early people who began working on privacy in the early 1990s. This makes me ancient by privacy standards. A lot of people who are getting into privacy now only have about five to 10 years. It’s been a while; it's been a minute. I wrote a book. I got into privacy because I was at Johns Hopkins University on a fellowship. I sat down and I was working with a group of students, and we were discussing the internet, and I had this strike of lightning, which was, “Oh, my heavens. I bet I can help these students with the internet.” I was very early on the internet and really started using the internet to help my students.

What I realized is as you got on the internet, there were implications that were different in the digital world than in the analog world. As I started thinking about this—I wonder how this changes job search, because at JHU, one of the questions was, “What are the professions that are going to be up and coming?” And I thought, “Well, I wonder how the computer, how the baby internet, will change things.” This led me to start really contemplating, “OK, what does the future of the world look like when it's a digitalized society?” It really became an obsession. I really started doing the early research on that. I came to an early conclusion, which was that it was going to change the world in fundamental ways, but ways that were probably unpredictable.

But I felt it likely that the level of autonomy that I grew up with—because I was primarily growing up in an analog world—that level of autonomy could not be possible in a fully digitalized world unless laws and norms and protocols were moved over from the analog world into the digital world. This was my initial premise for getting into privacy. I wrote a book in 1993 called Be Your Own Headhunter Online. I co-authored it with someone named Sylvia Tiersten, and the book was sold to Random House’s Times Books. At that time, that was a really big deal.

Yeah.

For that book, it’s one of the very first books that ever talked about the World Wide Web and privacy. In fact, it probably was the first one. It was put up for a computer press award and all sorts of really great geeky fun things. In the process of researching that book, I interviewed someone named Beth Givens, who ran one of the first privacy organizations in the United States called the Privacy Rights Clearinghouse in San Diego, which is where I lived at the time. I interviewed her and she had some really great stuff to say about workplace privacy, and it just enchanted me, and I'm like, “This is amazing.”

Later on in my own time, not part of any scholarly work I was doing, I decided to write about the risks of privacy in job searching mechanisms and ecosystems. What I found was that resume databases were a problem. I wrote a really early report on this, and it ended up being 50 pages, and there were a number of—they were called job boards at that time.

Yeah.

Now we would call them a job search site or a career site, who knows what they're called now. But at the time I found that there were a number of them that had very significant privacy and security issues. I mean, we're talking about selling raw resume databases off the back end to headhunters and whatnot. There were no laws at this time in the US. There was nothing to prevent this.

It was the Wild West; it was a rootin'-tootin' Wild West. I wrote this report, which by my standards today was just really very clunky and early. But I shopped it around to the Electronic Frontier Foundation and a couple of the other organizations that existed then. One of the people who was working at that time was Richard M. Smith. He was the first internet security geek guru person; he really set the model for that. He had gone to the University of Denver law school's Privacy Foundation, where he was the lead director, and he said, “We need to hire this person.” They hired me on as a principal investigator, and thus began my technical training on packet sniffing, coding, et cetera, with Richard Smith.

I did that until the really late ’90s, and then what happened is that after 9/11, I felt that the world was going to change and that all the trends we were seeing and had documented in the technical world would just accelerate with the passage of PATRIOT one and two. Most people who didn't live through the era I did won't understand the depth to which the United States changed before and after PATRIOT one and two. It really changed privacy.

I decided that the very best thing that I could do would be to move away from strictly technical research to go into really broad-based research. There was no organization in the world that was doing that. No one, literally no one, even academic institutions weren't doing it. That's why I founded World Privacy Forum.

Our mission and purpose is to do privacy research and data governance research in the public interest as a nonprofit organization. That's what we've done ever since 2002. We have a couple of core areas that we work on. We tend to work on complex ecosystems: data brokers, identity ecosystems, healthcare ecosystems, financial ecosystems, and, in general, how privacy works globally and the way these interact together.

Among the early reports we were working on were the intersection—this is in like 2007—the intersection between AI and genetic databases and machine learning. We wrote, I believe, the very first machine learning and AI predictive algorithm and privacy report to exist on the planet, to my knowledge. We intersected that with how that works with data brokers and data laws and whatnot.

Now this was all focused on the US, and then we just grew from there. And over time, because I became an in-depth expert, I ended up working with a lot of multilaterals. For many years—for 20 years now—I’ve been a delegate at the OECD in Paris, France. Over time, as I learned more and got more experience, I kind of got promoted. You don't really get promoted at OECD, but I'm now chairing the civil society work at OECD in the field of AI.

It's been a really great perch. I also now have a position at the World Health Organization. I am an advisor to their health data committee, or there's a thing called the Health Data Collaborative, and they have a board, and I'm an advisor to their board on data governance, and I also work directly in their data governance working group.

I also work with the UN in their national statistical division on global governance of national statistical organizations and what the proper privacy and data governance controls are, because everything is changing. All of these ecosystems that I've mentioned are just changing so much. We've done a lot of work. For the US audience, we have something called a Patient's Guide to HIPAA.

We literally just finished an update a couple days ago. Massive, massive update that took a very long time. We’ll be getting that published in the next couple of weeks. We have a global study on all the data governance laws in the entire world. It's the only one that exists that's done to what's called the UN M49 standard. I know that's very geeky, but it includes every country. Most of the research you see on this does not include every country.

We include the gaps and the things that are there and the things that aren't there. But anyhow, we have a lot of work on AI that's come out. We did a huge report in 2023 on AI governance tools. And I'm working on quite a few pieces right now on health privacy and AI on something called collective privacy, which I've now taken all over the world to get global feedback on. I workshopped it in the US at the Privacy Law Scholars Conference in 2024. There's a lot of good stuff coming. It's a very, very good time to be involved in privacy.

A lot of stuff there. The launch of the internet was probably the beginning of a major shift in privacy. What was the framework and the perspective on privacy before the internet launched in the US?

If you look at the internet—let’s put that down to like late 80’s. That's when I got on it. I think that's when most non-government people—you would have to be in academia or a student, but that’s I think when most people who were early adopters got on it. That, or maybe up to like ’91, ’92, ’93. That was the phase.

That's when I first got on. Before that, really what you're looking at is a host of laws that came into being, primarily in the US, in the very first modern age of privacy, which was the 1970s. The very first privacy law in the world was passed in Hesse, Germany. It's less than a page long. But in the United States, in the 1970s, the US passed something called the Fair Credit Reporting Act, or the FCRA.

It is considered to be the first major piece of privacy legislation in the world. It's quite good, and it's based on fair information practices—FIPs. And the fair information practices were actually developed in the United States by an advisory committee of the Department of Health, Education, and Welfare—HEW—before the agencies changed and it became HHS and the Department of Education.

Anyhow, that early HEW committee had absolutely famous members, including Willis Ware. The fair information practices were enumerated in the HEW report in the US. The OECD in Paris noticed them, and they're like, “You know what? We like these.” They took them and put them through a multilateral, multi-governmental process, and those became the OECD privacy guidelines. The OECD privacy guidelines are based completely on the FIPs, with additional narrative, et cetera.

They're very important, and because they were agreed upon and ratified by the majority of governments in the developed world at that time, they percolated into the laws. Willis Ware, the very famous computer scientist who worked so much on the FIPs, was still alive at the time; they became the OECD privacy guidelines, and Willis Ware lived to see them become the Rock of Gibraltar of EU Directive 95/46.

It was a mid-1990s privacy law in Europe called the Data Protection Directive. The Data Protection Directive was the first multinational privacy law that ever passed. From there, you just saw a profusion of privacy laws in the US—in health, for example, HIPAA in 1996. You have all sorts of other privacy laws, GINA, the Genetic Information Nondiscrimination Act, and so on and so forth.

You saw all this from the ‘70s all the way through to the ‘90s. You really saw a lot of these laws passing. And then of course, later on, when you get into the 2000s, you get into the beginnings of state privacy law—a little bit, especially in the health area and the financial sector. The big event was right around 2012-ish, 2013-ish, when folks in Europe were like, “The EU 95/46 isn't really matching where the internet has gone. Where digitalization is taking us, we need something updated.”

That thus began the conversation around what we know as GDPR, or the General Data Protection Regulation. One of the great masterminds behind GDPR was an Italian, the European Data Protection Supervisor, Giovanni Buttarelli. He was a brilliant and charismatic man, and he was the EDPS, the data protection supervisor for all of Europe, and he shepherded GDPR through to 2018, when it came into force.

He conceived of it as having extraterritorial provisions, which meant that the law was based in Europe, but it applied to any company, anywhere, doing business with Europe. This created what I consider to be the most significant change in privacy law in the world: it incentivized all the other countries of the world, for the most part, to pass equivalent legislation. That's pretty much what's happened.

I spent four years creating the research on what this looks like in the world. Now we're at 165-plus countries—most major countries in the world, and now even the very small countries—passing either GDPR or GDPR clones and doing that implementation. One of the only countries that hasn't done this is the United States, along with small island nations like Tuvalu, places that do not have the economic infrastructure to pass such a law.

But this has created a paradigm where you've got a system and an ecology of privacy ideas that are updated and grew originally from FIPS, but then were expanded and changed and altered with the technology. Now we're sitting at what I call kind of the flower of digitalization. If you look at the internet and where it started in the late 80s, early 90s, it was the beginning of what I would now call digitalization. Digitalization, I always thought it would take about 20 years.

It did. And in fact, I would say it took about 25 years, and you'll remember, like, the big data era. When I saw the big data term flying around, I thought, “OK, now we're getting into critical mass of data. This is just the outgrowth of the internet and better connectivity and better digitalization of native databases, et cetera.” But all of this was if you were going to look at a trajectory, this was all trajectory leading up to where we are right now.


Where we are right now is actually that the internet era has flowered in bloom. That bloom has done its purpose. It has fallen away. The actual tree sprouting from that is growing. That really is the blending of machine learning, artificial intelligence, and digitalization. That's really where we are right now. There's so much confusion around how AI impacts everything, and everyone's talking about LLMs.

But the factual basis is that the majority of what's happening is huge advances in machine learning, which have created some very deep machine learning, which have caused extraordinary, extraordinary systemic change across the world, even in parts of the world where you wouldn't think that this is happening. But the impacts are far and wide, and right now, we are in the transition that happens once every thousand years. We're very lucky people.

What were the consumer perspective changes with this initial digitalization of everything? And then let's talk about this next step with AI—what does the next step look like?

The initial consumer perception of all of this stuff was that this is great, right? The internet's great. Fabulous. I remember in the very early days of Google, when they started Gmail, I looked at this and I'm like, “What? They're allowing their advertising engine to scrape mail for keywords. No, this is not OK.” And I wrote this fairly passionate letter basically saying, “Look, this sets a really negative precedent.”

Beth Givens and I worked on this, and we published it and got around 30 sign-ons internationally—and I got death threats for two years over that letter. But you know what? We were right. Eventually I got a call, like a decade later—it was a long time after that—and Google was dropping their ad scanning because it did not pass muster in Europe.

It created huge legal problems for them. They stopped it. I consider that a win. That's the right thing. But that one little teeny episode just shows you consumer opinion went from “Yay!” to, “I don't know; not sure.” And the exact same thing, the same thing you saw with social media. Well, let's post everything on Facebook or MySpace or all these other little places.

And then of course, there's been a lot of migration to all the other social media that's grown up since those early days. I think there's a lot more knowledge about social media now. Parents used to post their kids’ pictures willy-nilly, and now you have children suing their parents for doing so. You really see this continuum, and it's pretty predictable depending on where you live—that's a caveat, by the way. But in the US you really see this. It goes from, “Yay, this is great innovation,” to, “Oh, what did we do? We need some guardrails.” But in other parts of the world….

In Asia, it’s a very different mindset. It’s so profoundly different. It's almost difficult to describe. They are more flexible in their ideas about privacy. They're not as locked down, but at the same time, they're much more cautious about putting up guardrails, but the guardrails themselves are more flexible. That makes a really big difference. In Africa, you see that depending on what region of Africa you're talking about, francophone, anglophone, et cetera, you will have a different approach.

India, you'll have a completely different approach there. It's the most digitalized society on the face of the earth. They have the Aadhaar system, so they're fully digital and biometric, and they have real-time systems, even in the smallest, poorest villages. They have very different ways of doing privacy. They have a Supreme Court opinion that really helped them. There's so much tokenization and de-identification in India.

They have amazing technical privacy protections. You look at Europe and you see a lot of legal protections, but there are emerging gaps because of the way technology is changing. It kind of depends on where you live and what your mindset is, what the conclusion is. But I think that right now what we're really seeing is we're seeing a very significant adaptation to artificial intelligence.

There was a study that came out. I don't know if all of it is public yet, but it was presented to an OECD meeting in June of this year—it was about mid-June—and it was a study from the University of Melbourne. They looked at trust in all of this stuff that we're talking about: what is the trust index, and then what's the use index? The highest use of super-advanced digital technologies is in the wealthiest countries, as you might imagine, including, especially, the US.

But the highest trust of these systems is in the developing world, developing Asia as well as developing Africa and other regions. What you see is actually a really interesting situation where some of the developing nations are earliest adopters of some of the new AI technologies or systems, and some of the developed nations are going, “We're not sure about this yet. Maybe we should wait a bit.”


There is a very real potential for what a lot of experts are calling a leapfrogging effect to happen within the next decade. I think it's very early days right now, even though AI and machine learning have been with us for many decades—remarkably many decades; in particular we can point to the 1950s, when Bill Fair and Earl Isaac came up with their credit score. I think it depends on where you live. It depends on your point of view. But in general, I'd say that most people go from trust to mistrust and wanting guardrails.

Right now, though, the ecosystems that we're living with are so complex. In the early days of computing, when it was DOS for me, I could program DOS pretty easily, but when it went to 64-bit on Microsoft systems, I'm like, “Oh, this is too much for me.” It's really hard to look under the hood of an Apple computer, an iOS system. Our systems are very complex, and there's a lot to be said there. In the US, one of the themes that you see in privacy is control and consent.

Let's control this data flow; let's provide the opportunity to consent. But we're moving into a time when consent is changing radically, and the opportunity to have genuine insight into every single piece of data flow about you is becoming more and more difficult.

There's a very, very significant movement called DFFT—data free flow with trust. Data free flow with trust is something that's being worked on at a global level amongst a lot of the governments to figure out, “OK, how do we manage this situation, and can it be managed?” There's a lot of controversy around data free flow with trust. People who are on the privacy side look at it and go, “Hmm, not sure about this.”

We think we need more guardrails. People on the innovation side, they're like, “This is good because there's fewer roadblocks.” But I think, actually, we are in such a complex situation, it's almost impossible to characterize everything under one rubric anymore. The geographic differences are quite significant now. Regions are making their own pathway. I'd say the game changer, actually, in privacy is more than anything else, the impact of advanced machine learning.


In a report—Bob Gellman and I; Bob has worked with the World Privacy Forum for many years—we wrote a section right at the beginning where we discuss what I call modern privacy. We described pretty much what I talked about here, which is the lead-up to our modern day. In many very meaningful respects, we are at the end of what we knew; a lot of the privacy principles have flowered.

There are new principles awaiting, but we don't know what they are now. I think there are people who would tell you that they know what they are. I think I see hints of what they may be, but I actually think we need to be very careful right now because we need to look and look at the ecosystems and see what's happening before we just make these assumptions and we're like, “Oh, well, let's just assume that we need this or that.”

I'm not sure of everything we need. I am sure of some of the new problems, and it's one of the things I've been working very hard on. This is the collective privacy problem I was talking about. This is something that's been lurking in the wings for about 20 years, but now we really have a situation. But there are many other aspects of privacy that change—some get better, some get worse. It's not an all-or-nothing situation. It’s a very complex ecosystem at this point. I hope that gets at some of the question you had.

I think that kind of helps. It is not black-or-white; it is not a linear progression.

It’s nonlinear.

It's an incredibly complex system. It's one thing with, say, gasoline-powered engines. OK, they've been around for 80 years or whatever it is. There's refining. We can make them more efficient, lighter, smaller, faster, with better power ratios. We can tweak them for vehicles that need to tow. It's tweaking a known system. I think with privacy, we don't know what all these systems are going to look like in the future. We’re in a stage of “We don't know what to do.”

That's a really great analogy. I really like your analogy a lot. I think it's great. I think from the 1970s till about three years ago—I’m not going to put it quite at 2017, but let's put it at 2022. I think we were working with a known system and we were tweaking it. The advent of ChatGPT did not change the world of machine learning.

Not as I knew it. 2017 did, with a paper that really talked about transformers and articulated a different, more efficient way of doing AI, which led, of course, to advances in large language models, et cetera. But that is not everything. What ChatGPT did, in its inception, in its first burst, was really capture the imagination of people, because basically they sucked in and ingested the internet, and no one had ever done that before.

It was audacious in its scope and in the amount of money, the chips, the sheer compute that it took. I mean, it was audacious and bold, but it does not mean that when we're thinking of privacy, that's the only thing we should be thinking of anymore. That's kind of the overreaction that pretty much most lawmakers are having. I think this will come back pretty quickly. There are already a lot of signs that it's coming back off of that hype cycle. Look, it's important, but it's a piece. I think that we do have hints of where the problems are because we have a lot of information about machine learning and about neural networks and how they're operating.

We know where some of the problems are. One of the big problems comes as you get to more advanced science. Let me give you some examples of this. There are GIS systems, or geospatial information systems, throughout the world. These are used primarily by governments and by multilaterals for very good purposes. For example, there are a lot of GIS systems in the desert, northern part of Africa, and they track the nomadic tribes. The purpose for this is to provide healthcare in the places where there are nomadic stops and whatnot.

There are a number of tribes, quite a number of tribes, that are tracked now. Each tribe has an ethnicity. Now, biometrics have been deeply criticized for having algorithms with racial bias or racial problems. Actually, it's deeper than that. It depends on the algorithm and the system, but algorithms can have problems with age.

Either too old, too young, can have problems also with gender, can also have problems with skin tone. The more depth, the more difficulty. But that's just race and gender and age. There's something more, which is ethnicity, actual ethnicity. There are advances in biometrics, particularly in the region of Africa where there are some researchers that now have algorithms that have been tailored to detect very specific ethnicities.

Some of this is now used with the GIS systems to track ethnic tribes as they move across Africa. The purpose is to help those people. This is an example of a system that can be used to help or hurt. Right now, it's helping. But can you imagine—there are many thousands of ethnicities across the world, and as systems become more attuned to ethnicity and to other deeper aspects, what will be tracked? Will it be health conditions? Will it be who knows what? We don't know, but the capacities are going in that direction.

We see a direction. Do we know what to do about this yet? Well, I do think that tracking ethnicities and tracking health through biometrics or other means needs to have guardrails. What do those guardrails look like? That I'm not sure of. I think we need a lot of work, a lot of factual work to figure that out, as opposed to jumping to a proposed solution that's an all or nothing, or based on emotion, not on fact.

In most of the scientific world, when you make a proposal for a solution, you have to test that solution. But in the legislative world, really in privacy, there are all these proposals for fixing privacy that literally have never undergone a single test. We're past that era. Now, any proposal really needs to undergo a test of truth. A lot of the AI work, legislative work, that's being done today, people are like, “Oh, we're so worried it's going to hurt innovation.”

Anyone who's actually done the science just kind of looks at it and they're like, “Actually, they apply to like two companies in the entire world. It's almost inapplicable to any AI that's meaningful.” Your question was what's the consumer feeling about this? I think that ecosystems are so complex. People have this vague sense that there's something that could go wrong, but people also have the vague sense that they want to take advantage of these tools. But it's very, very hard to see, “OK, what are the privacy implications?”

We're used to thinking about privacy as our privacy, your privacy, my privacy. But privacy changes. It's almost like it changes color. I've got animals on my shirt. It goes, like, from here to here, from one animal to another, in many respects. Well, both animals still exist. There's always going to be a form of analog privacy, what I would call our classical privacy, based on the FIPs. Those principles are very well established and are not going away, by the way.

They're still relevant, but now we're also starting to see additional things that can create problems. I talked about ethnicity tracking through a combination of technologies and modalities. But the other thing I would say is that health is a really important area. Machine learning as applied to health has been giving us a lot of really good advances.

A lot of those advances are coming in research. For example, in the US, there's a program the National Institutes of Health runs called the All of Us program. Bob Gellman and I have written two entire, very detailed, geeky reports about this program. It's a genetic database, and it's a biobank. When it was first put forth, it was not under HIPAA, the federal health privacy law, and it was not under the Common Rule, which is a very important human subjects research regulation.

We wrote the [inaudible 00:38:20] report, and I'm very proud to say that they have voluntarily brought it under the Common Rule. That's a really, really good outcome. But a couple of years ago, it came to their understanding that the role of artificial intelligence and machine learning in a biobank genetic collection like theirs was going to create potential problems for groups of people—collective privacy.

They did a very brave study, and they studied tribal members, and they did a policy study and a number of others. It was a two-year study. They did a very good job. It was published in, I believe it was 2023. It's among the more important studies published in our era. It really is. And it really provides a pathway to start to understand the kind of the way we can find our way forward.

What they found is that if you are a member of an indigenous tribe or a very identifiable group, it could be genetically identifiable. For example, maybe you have a particular variant of melorheostosis or another very, very rare orphan disease. Maybe you have those characteristics and what they found is that the privacy mechanisms that are available today to us don't cover this.

In other words, when you have provided your genetic sample to a large biobank, when it's analyzed with advanced analytics, which these are all using today, AI machine learning, the ability to identify someone as a member of a group is very, very significant. The thing is, that if, let's say, that you're a member of an indigenous tribe, a Native American tribe, you can be identified as a member of that tribe.

In the past in the US, there's been stigma associated with that. They wrote about this in the report. I'm just repeating what's written in the NIH report. They said, “Look, you need to really think about whether you're going to participate in this program. Some people want to participate so that they increase the healthcare for their community, the collective group that they're in. Some people want to stay out so that they're not identified to a group.”

For example, I mentioned melorheostosis. The last time I checked the database, there were 22 people in the United States with melorheostosis. It's a very rare disease—on the order of one in a million people in the world have it. The basic rule of thumb—it's not always true, but it's a basic rule of thumb—is that when you have a dataset that's very sparse, where very few people are in it, a high-sparsity dataset, you become very identifiable.

We know the names of those 22 people, so to speak.

We know where they live; we know them. When they have kids, and the propensity for their kids to have this. There are still places in our lives where there's information we don't want people to have. People who have significant diagnoses tend to be among those people. Health privacy continues to be a very important area, but it's also a very important area of research. You're going to find people across what you call the continuum or the spectrum.
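To make that sparsity point concrete, here is a minimal sketch in Python—a hypothetical illustration, not anything from the episode or the NIH program, with made-up records and field names—showing how adding a rare attribute can shrink someone's “crowd” down to a group of one:

    from collections import Counter

    # Hypothetical, made-up records: (3-digit ZIP, birth year, diagnosis).
    records = [
        ("920", 1971, "type 2 diabetes"),
        ("920", 1971, "type 2 diabetes"),
        ("920", 1971, "melorheostosis"),   # very rare diagnosis
        ("303", 1962, "asthma"),
        ("303", 1962, "asthma"),
    ]

    def smallest_group(rows, quasi_identifiers):
        """Size of the smallest group sharing the same quasi-identifier
        values -- the 'k' in k-anonymity."""
        groups = Counter(tuple(row[i] for i in quasi_identifiers) for row in rows)
        return min(groups.values())

    # Using only common attributes, everyone hides in a group of at least 2.
    print(smallest_group(records, quasi_identifiers=(0, 1)))     # -> 2

    # Add the diagnosis and the rare-disease patient becomes a group of 1,
    # i.e., uniquely identifiable in this sparse slice of the data.
    print(smallest_group(records, quasi_identifiers=(0, 1, 2)))  # -> 1

The same effect only gets stronger with genomic data, which is why the warning about group-level identification applies even after names are removed.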


There are going to be people who say, “Take all my data. I have a terrible disease, and I want to help other people with it. I don't care.” Then you're going to have people who are like, “I'm 35 years old. I have my entire career in front of me. I have kids, and it's genetically linked, and I want to let my kids have a really good life.”

I think I'm going to stay out of this. Everyone's mileage may vary; there's variability here. I think that we have to be really careful to understand all of these permutations and complexities. Privacy, when it comes down to it, is highly contextual. This is what makes it so difficult to actually implement and legislate and whatnot. When you add AI to that, it gets really complex.


Basically, what I would say right now is that we are in a very deep transitional era of great importance. It's incredibly important that we understand what we're doing, and we move slowly and carefully, but we still need to move and understand that AI will require guardrails. I'm not certain what all of them look like right now. I know the principles that need to apply; there are implementation issues and all sorts of issues. But there are a couple of things I know: this revolution will be digitalized.


If you want to ensure that your systems are private, you're going to be using an AI governance tool to do that. That means AI governance tools had better be properly audited and certified. Right now they're not. We don't have good systems in place, but I think the tools will be really important. I don't know everything yet, but that's part of the work right now.

Are you excited about where this is going for privacy?

I am thinking.

Or fearful? Or is it kind of, “I don't know which one to be yet”?

I think neither. I think the most important thing is to understand that human beings are enormously resilient. The systems that we've built in the past have been resilient. However, we have built some deeply flawed systems in the past as well. Everything from political systems to technical systems to other kinds of systems.


We need to make sure that whatever it is that we're building, we're building things that are beneficial and that have minimal possibility for harm. In an ideal world, what it would look like is that we're building these systems for purposes that benefit people and the planet—both; we need both. But these systems must be resilient enough that they cannot be turned against people.

We see this exemplified deeply in identity ecosystems today. We all know what happened in Rwanda. In the 1990s, there was a terrible genocide based on their ID cards, which listed ethnicity. That's why ethnicity is just such a big deal. The Hutus and the Tutsis—it was literally a massacre, as we all know.

That matter is still kind of bubbling, but consider the identity ecosystem in Rwanda today. I lecture at Carnegie Mellon University in three areas: privacy, identity ecosystems in the developing world, and AI policy in the developing world. CMU Africa is in Rwanda. I had the very great privilege of teaching there in person in 2023. When I was there, I was there to teach identity ecosystems in the developing world.

I invited Josephine Mukesha, who leads the national identity authority there, to come and teach part of the class, and she was amazing. She spent an hour and a half talking about what she had specifically done to make sure that their identity system could never be abused in that way again. No system is perfect, ever. It does require good human governance, but we can do a lot to build systems that are robust, that create guardrails so that there is less interference.


India's Aadhaar system is very massive. It's the largest ID system in the entire world that is biometric and fully digital and real-time. There's really nothing else like it, and it covers 1.4-plus billion people—billion with a B. It's incredible, from the largest city to the smallest village—coverage you just wouldn't even believe.

But the system, when it was first built, contained profound privacy problems. If you can imagine a nightmare, it was in that system. If you had to breathe, you basically needed this ID. Then any vendor you went to, like any grocery store or telecom, they could collect your biometric, and then build a new database. All of this was adjudicated by the Supreme Court of India, and there was a very important landmark decision called the Aadhaar privacy decision.

They struck down—I believe it was Section 57 of the law—the provision that allowed that to happen. They mandated that India pass data protection legislation. They mandated that they federate their centralized database and provide significant privacy controls, and they did all of this. And today it's a system that I think is very resilient. It's not perfect—no system is perfect, again—but it's decentralized as much as it can be for a centralized system. You have tokenized interactions. Your identity is tokenized, so your identity is not flying all over the place. There are ways of doing this. There's great hope in the world. But we just have to figure out if we have the will to do that and what that looks like.
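As an aside on what “tokenized interactions” can mean in practice, here is a minimal sketch in Python—my own illustration under simple assumptions, not Aadhaar's actual mechanism; the key, ID, and vendor names are invented—of per-vendor tokenization, where each vendor receives a derived token instead of the underlying identifier:

    import hmac, hashlib

    # Hypothetical secret held only by the ID authority, never shared with vendors.
    AUTHORITY_KEY = b"example-secret-held-by-the-id-authority"

    def vendor_token(national_id: str, vendor_id: str) -> str:
        """Derive a per-vendor token: stable for one vendor, but tokens from
        different vendors can't be linked without the authority's key."""
        msg = f"{vendor_id}:{national_id}".encode()
        return hmac.new(AUTHORITY_KEY, msg, hashlib.sha256).hexdigest()[:16]

    person = "9999-0000-1111"   # made-up ID; the vendor never sees it directly

    print(vendor_token(person, "telecom-42"))   # token the telecom stores
    print(vendor_token(person, "grocer-7"))     # a different token for the grocer
    # The same person yields unlinkable tokens at each vendor, so no vendor can
    # rebuild a central ID database just by pooling what it collects.

That is the general design idea behind tokenized identity: the authority can still resolve a token when it needs to, but downstream databases never accumulate the raw identifier.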

That's the challenge: do you have the right people in place, people who want to put this in place, and the collective will to make it happen?

Well, that's a hard question.

I don't know if that's as much a question as a statement.

It differs by region. I believe Asia is doing really well—I'm very, very impressed with how Asia is handling this transition. I think Europe is doing its very best. I went to the country of Georgia, to Batumi, a very beautiful city on the Black Sea. This was for the spring meeting of the European data protection authorities, and the Georgian Data Protection Authority ran this meeting. They ran probably the best data protection authority meeting I've ever attended. It was so good.

And the conversation was all about what we're talking about today. One speaker—I believe she's the deputy head of Italy's data protection authority—gave the most articulate speech I have ever heard about the role of government, the role of people, the role of data protection authorities, and the way that the law intersects with and interacts with technology. She was very clear, and she said it so much better than I say it, but she basically said, in the most eloquent, most fabulous way: look, these two systems do not yet fit with each other at all, and there are many significant adjustments that will need to be made.

We'll need adjustments to the law, we'll need adjustments to the technology, and we'll have to work together to understand this. We'll have to be very patient. We can't say lawyers are better. We can't say technologists are better. We can't have this war. We can't say AI is so special that it can't have a law applied to it. We can't say the law is so special that it doesn't have to recognize the challenges with technology. There has to be a melding and great adjustment is required.

I think she really got that right. But the data protection authorities that are in place in, like, 160-plus countries—not all countries with legislation have DPAs, but of the ones that do have them, I have to tell you, the DPAs across the world are doing an amazing job of really trying to understand where we are right now. They are the people, I believe, who are going to be very, very important in leading us forward.

They're not legislators. They are legal experts, technical experts, compliance experts. In other words, they understand that just because you have a law in the books doesn't mean that it's actually going to be enforceable or that anyone will care. They see the legal side, the practical side, and the technical side. And most of these folks now have deep benches of technical, legal, and policy advisors.

I put a lot of hope in them, I really do. I'm really glad that this kind of ecosystem of data and privacy experts exists. There's some really great work at the multilaterals as well. Multilateral work can get very big-picture—that's a bit of a flaw with it—but I think the principles that are emerging are really good. I'm really impressed with UNESCO's AI impact assessment work. It's really great. I love what the OECD has been doing globally with GPAI, the Global Partnership on AI. I just think that there's a lot of good work happening right now, but some of it really differs by where you live.

Is the US behind the curve or kind of on the cutting edge? I think I know the answer to that question.

It's both. Look at what I would call the compute map. The United States has the most compute power in the world. The United States has some of the largest models in the world. It has exceptionally good technologists and real power centers for artificial intelligence and machine learning. I would add quantum to this. The US also has a lot of quantum capacity compared to the rest of the world.

Quantum becomes very important. We haven't talked about that, but we need to. The US is dominant in that way. However, other regions are looking at policy and how this is impacting people in a different way, and how these new permutations of technology are impacting people and regions and countries. It’s quite variable. You have a whole bunch of people who are looking at those pieces. You have civil society. You have academia. Some governments are doing this work.

That report I mentioned that Kate and I did, it's called Risky Analysis. We go through every government that's doing significant AI governance work and talk about the tools they're developing. We're in a moving state. The US, if you ask me how it's doing: it's ahead on compute; it's behind on understanding how to put guardrails up. I think there's great reluctance in the US to slow innovation, but here's the problem, here’s the balancing test. The studies that are looking at trust in digital ecosystems show that the developed, wealthy world has the least amount of trust, and the ecosystems are being built there, which means that those systems aren't going to be adopted as quickly.


There's a great balancing. If legislation is too tight, you will have a problem. If legislation is too loose, you're going to have a trust problem. That's going to reduce adoption, so you end up in two different holes, two deficits, for two different reasons. You want to find that sweet spot. And the sweet spot is moving forward really slowly, understanding that we will need guardrails, but they're probably going to need to be very specific to the use case—maybe even, and here I'm speaking only about the US….

Maybe from the states, at a state level. Let me give you a great example here. The Food and Drug Administration, the FDA in the US, has a medical device program. There are well over a thousand medical devices. Each medical device in the United States has what's called an A-119 standard attached to it. A-119 is a very little-known US law that provides for creating a voluntary consensus standard for something very narrow and precise.

It's basically, “Here's a heart pump; let's make an auditable standard for this heart pump. Let's talk about this, let's get this done, and let's get this heart pump out and make sure we have oversight over it.” That's the goal. The whole idea of the voluntary consensus standards is that they're faster than regular standards, like ISO standards, which can take years to make. Medical research wants to move faster than that, and this was a byproduct of trying to do some modernization at the FDA 25 years ago.

Now what's happening is that, as the FDA is dealing with AI in medical devices, you're seeing the publication of A-119 VCS standards for medical devices that include AI, and they're fascinating. One recent device is a heart pump—it's just easy to compare. There was a heart pump, and the FDA published a before-and-after of what happened with the AI in this heart pump, a heart pump that is using AI to help modulate it. When you first use it in a person, it's going to behave beautifully. You see this nice, clean graph with very little noise or fluctuation in how it's operating. It's just beautiful. Nice lines, clear, clean.

It looks like a ballpoint or felt-tip marker has drawn a red or a blue line. But as time goes on, the fit—how well the algorithm functions with the data—can go off. In fact, it can and it will. It needs to be adjusted. It's just like a car needs to be tuned or get its oil changed every once in a while. Algorithms need to be adjusted and refit. There's a monitor on these particular heart pumps that gives a warning when there's too much noise in the system. Noise is like having dirt in your oil.

If there's too much noise in the algorithm, you can look at the algorithm output. It's like all these lines, all of a sudden instead of one line, you've got, like, 50 lines and they're intersecting and very messy. You do not want this going on. They used an AI governance tool to do noise reduction and bring the noise back down to a good level.

You need a good auditing standard to say, “What's a good level for this noise?” You need that standard. That's not a global standard; that's a standard for that machine. You need a good AI governance tool that will make sure that machine is functioning to its optimum standard. Is that a tool that will be sold to everyone? No. It's a tool that will be devised for this use case only. But is it completely necessary? Yes. Is there a law that you need? Well, you need the standard. The A-119 law is already in place, so you don't need a new law for this.
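As a rough illustration of the kind of noise monitoring being described—a generic sketch in Python, not the FDA's or any vendor's actual tool; the threshold, signals, and numbers are invented—a governance check can watch how far the measured signal scatters around the algorithm's expectations and flag when that spread exceeds an agreed, auditable level:

    import random
    import statistics

    random.seed(0)

    NOISE_THRESHOLD = 0.05   # the auditable "acceptable noise" level for this device

    def noise_level(predicted, measured):
        """Standard deviation of the residuals between what the algorithm
        expects and what the sensors actually report."""
        residuals = [p - m for p, m in zip(predicted, measured)]
        return statistics.pstdev(residuals)

    # Early deployment: sensor readings sit close to the model's expectations.
    predicted = [1.0] * 200
    early = [1.0 + random.gauss(0, 0.01) for _ in range(200)]

    # Months later: the fit has drifted and the residuals are much noisier.
    later = [1.0 + random.gauss(0.03, 0.08) for _ in range(200)]

    for label, measured in (("early", early), ("later", later)):
        level = noise_level(predicted, measured)
        status = "OK" if level <= NOISE_THRESHOLD else "ALERT: refit and recalibrate"
        print(f"{label}: noise = {level:.3f} -> {status}")

The substance of the standard is in choosing and justifying that threshold for that specific device, which is exactly the narrow, auditable scope a voluntary consensus standard is meant to pin down.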

In this case, at the FDA, they are functioning really well within what they're doing, and each medical device gets a standard. I think that's a good example of something that's happening right now. What we need to do is ask the questions. Are we looking at a different world where we don't have these big, giant silver bullet laws? Maybe we have more flexible, voluntary consensus standards, but we have really strong auditing requirements, really strong oversight requirements.

Maybe there are differences in how we do this, and maybe it's use cases, maybe it's ecosystems. I just don't know yet. Now we're in the realm where I just don't know. I think it would be impossible for anyone to really know everything right now. But I think we can see hints of where this is going.

I think in three to five years, we'll have a much clearer idea. But it is so important that we don't jump to conclusions on any side of the argument or any area of the argument. We need to gather facts, gather the research, gather the evidence, and let the use cases talk to us empirically—let the facts speak—and try to stay away from “Let's regulate AI” or “Let's not regulate AI.” These are emotional arguments.

Based on specific situations that happened.

Yes. And there's a billion situations that are happening. That's what we're dealing with. The complexity of what we're talking about right now is so great that it's really difficult for the human mind to fully understand it. We talked about non-linear change. When I talk about non-linear to students, I like to bring up a graph and it shows linear thinking, which has this beautiful, like, ski slope like this, but then non-linear is all of a sudden you're on a high jump. It's so hard.

It's like compounding interest times a billion. That’s where we are with privacy. One of the things I'm doing this summer is I'm doing a lot of writing on modern privacy and what it means. All that's coming out in early September, late August, all in chunks and pieces. But yeah, it's complicated. How's that? But it doesn't mean there's not a through line. I think we'll find that.

If people want to read your musings—I don't know if “musings” is the right word—your thoughts on where we're going, where can they find that or what you're producing?

Yeah. I tend to produce very long works. However, I'm trying to get a lot better at that. We have podcasts, and I'm actually in the process of making a very, very short series—short, in other words, I talk only for 15 minutes—a lot of podcasts on these musings. Those are coming out in the fall. I also have a lot of op-eds coming out about all of this stuff. I tend to write hundred-page reports that take years. That's where most of my stuff is, but I think the world has changed. I'm trying to change with it and write more musings. But you can find literally thousands of articles and things I've written on worldprivacyforum.org. And we just redid our website. It's a little bit hard to find things right now, but we're still building it out.

Awesome. Pam, thank you so much for coming on the podcast today.

Well, thank you for your invitation. It's been great thinking out loud with you.

Thank you.

 

 


About Your Host

Chris Parker

Chris Parker is the founder of WhatIsMyIPAddress.com, a tech-friendly website attracting a remarkable 13,000,000 visitors a month. In 2000, Chris created WhatIsMyIPAddress.com as a solution to finding his employer’s office IP address. Today, WhatIsMyIPAddress.com is among the top 3,000 websites in the U.S. 


COULD YOU BE EASY PREY?

Take the Easy Prey Self-Assessment.

YOU MAY ALSO LIKE

Axton Betz-Hamilton – Familial Identity Theft

FC Barker – Exploiting Trust (Part 2)

FC Barker – Exploiting Trust (Part 1)

Zachary Lewis – Surviving a Ransomware Attack

Dan Ariely – Why You Fall For Scams

Podcast Reviews

Excellent Podcast

Chris Parker has such a calm and soothing voice, which is a wonderful accompaniment for the kinds of serious topics that he covers. You want a soothing voice as you’re learning about all the ways the bad guys out there are desperately trying to take advantage of us, and how they do cleverly find new and more devious ways each day! It’s a weird world out there! Don’t let your guard down, this podcast will give you some explicit directions!

MTracey141

Required Listening

Some things are required reading – this podcast should be required listening for anyone using anything connected in the current world.

Apple Podcasts User

Fascinating stuff!

I've listened to quite a few of these podcasts now. Some of the topics I wouldn't have given a second look, but the interviewees have always been very interesting and knowledgeable. Fascinating stuff!

Apple Podcasts User

Excellent Show

Excellent interview. Don't give personal information over the phone … it can be abused in countless ways.

George Jenson


Content, content, content!

Chris provides amazing content that everyone needs to hear to better protect themselves and learn from others' mistakes to stay safe!

CaigJ3189

New Favorite Podcast!

Entertaining, educational, and I cannot get enough! I am excited for more phenomenal content to come, and this is the only podcast I check frequently to see if a new episode has rolled out.

brandooj

Big BIG ups!

What Chris is doing with this podcast is something that isn’t just desirable, but needed – everyone using the internet should be listening to this! Our naivete is constantly being used against us when we’re online; the best way to combat this is by arming the masses with the information we need to stay wary and keep ourselves safe. Big, BIG ups to Chris for putting the work in for us.

Riley


