AI creates opportunities as well as challenges. We may need to become more skeptical about what we see and hear, knowing that images and words can be generated without transparency about their creation.

Today's guest is Paul Valente. Paul is the CEO and co-founder of VISO Trust and a former CISO of Restoration Hardware, Lending Club, and ASAPP, with over 20 years of experience in technology, financial services, ecommerce, and information security innovation. Paul holds several industry designations, including Certified Information Systems Security Professional (CISSP), Certified Information Security Manager (CISM), and ISO 27001 Lead Implementer.

"As a consumer, when you work with a company or interact with the website or a business and you share your data, you are actually sharing it with a whole ecosystem of other companies." - Paul Valente
- [1:02] – Paul shares his background and what he does now in his career.
- [3:23] – One of the key learning opportunities for Paul was being a victim of, and being involved in, scams and cyberattacks.
- [5:28] – It’s getting harder and harder to tell whether something is legitimate now that AI can be used to generate content for scamming.
- [7:54] – As a consumer, when you work with a company and share your data, you are sharing it with the whole ecosystem.
- [10:16] – It is very hard for security to be managed when there is so much data.
- [11:58] – Surveys sometimes give a false sense of security.
- [14:21] – At VISO, they remove the friction in order to make the process scale. There are so many third parties and vendors. Focus on real information.
- [17:02] – Security is not a solved problem. There are always imperfections.
- [19:07] – There’s a variety of different responses to cybersecurity breaches.
- [20:46] – Companies who are transparent about breaches tend to be seen as good companies. It’s how you handle it and take steps to communicate.
- [23:27] – We’ve been trained to look for errors but today, with the use of generative AI, it is easier for scammers to create perfect messages.
- [25:30] – We need to learn ways to improve our ability to discern real content from fake content.
- [26:46] – AI also creates unique opportunities.
- [29:47] – We still tend to have the idea of AI being a sentient being based on science fiction. So what is AI?
- [31:12] – It’s all about shrinking the problem space.
- [33:17] – AI’s progress through what is called the Cyber Kill Chain will happen incrementally.
- [34:55] – Be aware of where you are communicating. You will need to look harder when it comes to social engineering.
Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review.
Links and Resources:
- Podcast Web Page
- Facebook Page
- Easy Prey on Instagram
- Easy Prey on Twitter
- Easy Prey on LinkedIn
- Easy Prey on YouTube
- Easy Prey on Pinterest
- VISO Trust Website
- Paul Valente on LinkedIn
Paul, thank you so much for coming on the Easy Prey Podcast today.
Really glad to be here, Chris.
Paul, can you tell me a little bit about what you do and what your company does?
Paul Valente, CEO and co-founder of VISO TRUST. Prior to founding VISO TRUST, I've been a longtime chief information security officer and security professional. I was the CISO at companies like Restoration Hardware, Lending Club, and ASAPP.
In my career, I've been focused on working with companies that are going through digital transformation and essentially relying on more and more other companies to do what they do. That's what led me to found VISO TRUST, where we work with companies to help them use AI to automate their assessment and management of the risk of doing business with third parties.
Awesome. Your path to CISO, was that an intentional path, or was it happenstance?
That's a great question. I got into technology at a very young age. Serendipitously, I was tasked with computerizing my family's small businesses. It went from there with the creation of networking. I got into networking with the creation of the internet. I got into building e-commerce websites and starting to build applications.
I really got into security when my wife eventually urged me to take a real job and get out of consulting. I got a job working for the city government. There, I was tasked with security administration, working on firewalls, network security, and other aspects, as well as working on computer crime and putting the first computers into police cars. That's really where I got into security. I spent some time teaching security at the college level. Since then, I've spent most of my career in the private sector building and running security teams.
That sounds like a nice, smooth trajectory as opposed to some of the other stories we've heard here.
It has been. It’s been very natural for me.
I'm going to put you on the spot here. We can edit this out if it doesn't work. I ask this question to my guests. Have you been a victim of a scam, fraud, or a cybersecurity incident anytime in your life?
I have been involved in numerous security incidents at companies that I work with, work for, or was contracted with. That's actually been one of the key learning experiences for me as a security professional. I feel like I was galvanized as a security professional working for a company called Bridger Commercial Funding, which no longer exists. It succumbed to the 2008 financial crisis, but it was a subsidiary of Bank of America.
I remember just a couple days after 9/11, the Nimda attack, which completely took down our company, and I was working around the clock to get the company back in business pretty much for a month solid after that. That was my first serious security incident from an operational standpoint, and through the years I have been involved with many.
What's the phrase? Either you have been a victim of a cybersecurity incident, or you don't know that you've been a victim of a cybersecurity incident.
That's for sure. Yes, they're happening all the time. Everybody's data has been breached somewhere. In terms of attacks coming in, there's only more and more: my phone, texts, emails, folks rattling the lock on the door. There's just always more, and it makes communication mechanisms that much more difficult and inefficient, for sure.
I was just talking to a friend this morning who had gotten his first “We've renewed your $399 Norton Antivirus subscription. Please call us if you didn't do this.” Luckily, he didn't fall for it, but it was his first one that he can see, and was surprised by it.
Indeed. Like many of us, I'm always getting calls from family members: "Is this real? Should I click on this? I just clicked on this, what do I do?" It's getting harder and harder now with the ability to use AI to generate content for scamming, as well as AI-based scam bots, if you will. I've actually got a couple of conversations going with a few of them on Facebook Messenger and via SMS on my phone as we speak. Lots of that out there today.
The criminals are definitely quick to attempt to leverage any new technology they can to their favor, are they not?
They sure are. In some cases, they're ahead. In most cases, they're closely behind what commercial companies have available to them. In terms of attack and defense, it's a constant arms race, a constant battle. Unfortunately, those of us who are end users are not completely protected, by any means.
Let's talk a little bit about how AI and how you guys are using AI to deal with cybersecurity and just risks as a whole. Risk is a pretty big pie, so let's talk through what you guys do and how that can help people.
Yeah, that's a great question. When I was just getting into technology and still was just starting to work with large enterprises back about the year 2000 or so, companies had their own data centers. They had their own infrastructure. They had their own computers. They relied on that. They typically had just a few outside companies that they might be sharing limited amounts of data with, or maybe just using services based on.
That has completely changed over the past 20 years. In the industry, we call that digital transformation. But what it essentially means is that there's a company doing every little thing, right? At one point, you would deal with, "Hey, I'm going to hire a marketing company," or maybe then, "I'm going to hire a marketing company and a web marketing company." Today, large enterprises rely on, sometimes, hundreds of marketing apps or marketing-related tools just to serve that one function, and then they rely on many more for other services.
As a consumer, that means when you work with a company or interact with a website or a business, and you share your data, you're actually sharing it with a whole ecosystem of other companies. For those companies that are responsible for your data that you shared it directly with, it's actually a very large and difficult challenge for them to manage the security of your data across all those different relationships. That's really where VISO TRUST comes in.
VISO TRUST uses AI to rapidly assess these companies in a way that puts the companies that you're sharing your data with in the driver's seat. It gives them the ability to understand the risk of different third parties before they start doing business with them, before they start sharing that sensitive data with them, and puts them in the position to make good risk decisions about who they're going to do business with and how they're going to do business with them. You can think of it as a critical web underpinning the trust of all these ecosystems, and that's why we call it VISO TRUST.
This is taking over the massive PCI compliance questionnaire type of thing that you get when you work with a Fortune 500 company. As you and I were talking before we started recording, I had worked for a company that was dealing with a large bank. While we weren't dealing with any bank customer information, they put us through the wringer of, “There's a plumber who walks through your front door. Have you done a background check on them?” We're like, “What are you talking about?” Because this was all new to us.
This document was massive. It was 60 pages of, “Check this. How often do you do background checks on your cleaning people?” We're like, “We're just doing websites.” This was crazy. It seems that it wasn't really built for the relationship that we were having with that bank. It was just this, “Here's this one process that we go through absolutely everybody with, even if it doesn't necessarily apply to you.”
Yeah, you're absolutely right, Chris. Historically, this process has been very, very difficult for companies to manage. Essentially, over the past 20 years, companies started out with, and have largely maintained, the status quo of trying to do at-a-distance, arm's-length, asynchronous audits where they send these questionnaires.
I remember I used to work at a company that was doing corporate social responsibility software in a web-based model for over 250 of the Fortune 1000. We had a very small security team, but a very solid security program that we were heavily invested in, and I represented that program.
These Fortune 1000 companies would reach out to me with literally 3000-question questionnaires. I remember being in a situation where I had to weigh my options of, “OK, I can either hire another security professional to continue to build more security and protect everybody's data, or I can hire somebody to answer these questionnaires.” Unfortunately, I had to do the latter because that's what we needed to do to be able to stay in business. It's been a very, very broken process. That's what created the need for VISO. We're delighted to be able to be solving that problem for our customers at large.
Do you think that those massive questionnaires ultimately result in less security?
I think they are definitely ineffective at reducing risk and often result in what we call a false sense of security. Companies usually encounter them on the receiving end when they're in the sales process with a large enterprise. They don't even always make it to the people that are responsible for security or highly informed about security. Sometimes it's sales professionals filling them out and saying, "Hey, boss. What do they want me to say to this question?" That can definitely lead to a false sense of security for customers.
For those that are diligent that do use their expert resources to fill them out, we have that dilemma that I just mentioned, where now they're not working on security. They're working on questionnaires and surveys. Yes, it does increase risk. It does create a false sense of security, absolutely.
I'm wondering, in how many of these instances does the salesperson just check the "Yes" box? "Oh, yes, we have that," without even knowing whether or not that security element is in place? It's like, does everybody realize that in some aspects, some of this might just be a paper game?
They are highly incentivized to do just that, unfortunately. There's quite a spectrum too. There's a lot of situations where the conversation within a company is, well, we're trying to get the deal, and maybe this company is more responsible than what I described before. It's, “Hey, boss. I know we don't do this today, but if we get this deal, can we do this?” “Yeah, we can do it,” so they say yes, right.
The thing is, they do that a bunch of times for a bunch of things. Then, say, they get the deal, and then that work, those spends, those investments, which are significant, need to be prioritized against everything else they have to do. They may have had good intentions at the start, but do they typically get all those things done within the term of the contract, or before you get started with them? Absolutely not. They usually don't.
Having actually lived on both sides of this problem for decades, I've got a view into both sides. That's really what's helped us take a different approach with VISO, where we're focused on removing the friction in order to make the process scale, because companies have so many third parties to deal with, and vendors have so many customers to deal with. We also focus on real information: artifacts of the security program that already exist, and extracting data from those in a completely automated way. That really addresses the scalability and data quality issues that questionnaires bring.
Got you. You're not just accelerating the processing of paperwork, but in some sense vetting systems, making sure they're actually in place and doing what they're supposed to be doing.
Exactly. Instead of using questionnaires, we're focused on artifacts of the security program that already exist. It actually solves a huge problem for the vendor because they can provide us all the documentation that they've created internally for their program, all the evidence that they've created internally for their program.
Also, any materials from any, I guess you'd call it fourth party, outside audits, or third-party audits that they have engaged in, they can provide us, all that material. Our system will automatically extract information about the security program and turn that into a bespoke risk assessment for that relationship. It saves them a lot of time.
Today, many companies won't even answer questionnaires, actually, as many as 50%. For customers, it's a broken process. Our process takes the vendor five minutes. We actually get love letters from the vendors, as well as, of course, putting the customer in a position to really make good decisions on risk.
That ultimately, I assume, helps the end users have a better level of comfort with the companies that they're working with in terms of, “OK, I know they're doing their due diligence. I know they're taking my data seriously and not just posting it up on the web for everyone to have access to, but really trying to do their best to keep everything locked down.”
Yeah, it's about creating a culture among companies, in between companies, and within companies, of transparency. That they're going to be open about security. They're going to be open and honest about their security programs, which even includes shortcomings, breaches, and things that happen, because this is reality today.
Security is not a solved problem. It takes constant diligence, constant effort, constant investment. There's always imperfections. Really facilitating that transparency between companies, which we believe also, to what you're saying, moves over or bleeds over, if you will, into how they deal with their customers as well, is absolutely right.
I'm trying to think through some maybe high-profile incidents over the last few years, where major brands have had a security incident. Let's say it wasn't necessarily their security problem, but one of the third parties that they worked with. Some think the Target breach was that way, in that one of their vendors was compromised. What are some of the other incidents that maybe we've heard about in the news, where it was third-party platforms that were accessed?
Yeah. Uber was recently affected by a breach by a company called Teqtivity, a little known IT firm that they were doing business with. Also, a little while ago now, but just something that was so eye opening for the industry, is the SolarWinds hack and breach, of course, which I think really changed the trajectory and understanding of what can happen for so many companies.
A little bit less on the consumer side, but also Okta had an incident not that long ago, which they actually handled very well with a lot of transparency, which I think was great from a credibility standpoint, but for folks in the technology sector, one that they were highly aware of.
I think that's always the challenge. You can get a good idea of what a company thinks when they announce their data breaches. What do they think about their customers? What do they think about their partners by how much they care to disclose to the public?
Yeah, you're absolutely right. I think this is one thing that I learned a lot about dealing with breaches. There's really a variety of different responses between different companies. Companies can work with legal teams to essentially just build arguments for why they don't have to tell anybody, and that does happen. There are other companies that really believe in transparency.
I think today, the companies that believe in transparency are ultimately rewarded. You're not just going to ruin your brand from having a breach today. Today, it's really how you handle it. That's what makes the difference. I'm really glad that that's something that has changed in the past five years or so, I would say.
I'm starting to see that change in some of the vendors that I've worked with. It was a small, like, five-person or six-person company I was working with. I got an email from them saying, “Hey, you may have heard about this particular cybersecurity incident in the news. We use their product or service. I don't want to disclose too much information. We use their product or service.”
“While we don't believe anything has been compromised, here are the steps that we are going through. We expect for us to do this right. Because we're a small organization, it's probably going to take us about a month to work all the way through this list of things that we need to do, but we're going to keep you up to date on milestones as we go through the process.”
They actually have done that. I was like, “Oh, this is a good group of people who know what the risk was. They know what they need to do.” They communicate it in a way, which made me feel like, “OK, I trust that they're taking care of their aspect of this.” Nothing ever bad happened to them, but it was just the fact that they overly communicated and explained what they were doing made me feel like, “OK, this is a good company.”
I completely agree, Chris. It's not whether it happens to you or not, it's how you handle it, and essentially, how you take steps to keep your customers informed and help them to protect themselves. That's what really shows your mettle, if you will.
As opposed to the generic, “We take privacy and security very seriously, which is why we have to announce this data breach because we didn't take privacy and security as seriously as we should. It's someone else's fault, and have a nice day.”
Right. I won't be too specific, but a company called [24]7.ai actually had a breach at one point that affected lots of companies. You can find lots of information about that breach from those companies, but you can't find that much information about that breach from the company themselves. There's just not that much trace.
They really didn't go public with it, but it's public. Everybody knows because there were hotels, there were airlines, there were all sorts of companies that were impacted. There’s really different ways to go. There are good choices and bad choices to make, for sure.
Speaking of AI, when we're recording this, and we won't go into this route, but just for a time marker here, there are a bunch of people that are worried about the risks of AI. What does it mean for society? Are we implementing AI too fast without understanding how it's used? You alluded earlier that cybersecurity AI is battling, so to speak. Where do you see the risks of AI intersecting with cybersecurity?
That's a great question. I think starting with the consumer angle and how we're all affected by it: those of us that have been trained on detecting scams, whether digital, phishing, vishing, or any of those types of things, have been trained to look for anomalies. We've been trained to look for grammatical errors in communications. We've been trained to look for, "This doesn't look quite right. I'm not sure this is definitely from Bank of America," that sort of thing.
Today, particularly with the release of things like ChatGPT and generative AI for images and that sort of thing, it's become so much easier for scammers. Today, somebody can create a phishing message that's perfect grammatically, very, very easily. A lot of those flags that we've been trained to look for, we're not going to be able to rely on anymore. The same thing goes with regard to any text communication, chat communications.
Many of us already understand that when we're looking at marketing messages, we're dealing with bots and AI. A lot of times, when we're getting support, we're dealing with bots and AI. If a scammer is in conversation with you on Facebook Messenger, for instance, you're going to be dealing with bots that are AI. They're going to be getting better and better at simulating humans.
I remember, I got one not that long ago. It was just somebody reaching out that I hadn't spoken to in a long time, or that's what it was masquerading as, that was, “How are you doing?” I responded asking how they were. They told me about getting their vaccines. They're just really setting up things in the context of the pandemic that would have been normal conversation.
It gets harder. You've got to add another layer of thinking: "Would this person really be talking to me about this now? Maybe this isn't them." It took a few messages before I could pick up on that. Like I said, it's going to get harder. That's the consumer angle, I think.
Also from a consumer protection at large angle, we have to push for the creation of services and the ability to discern real information from misinformation. Obviously, I think, from just a little while pre-pandemic and through pandemic, that's been a focal point for everybody. But I think what it means really is that we're going to emphasize real news sources that much more because we're going to have to start thinking that just about everything is fake that we get digitally.
Whether it's images, whether it's video, whether it's text, unless you can easily attribute the source and get to the facts, you're going to have to assume it's fake. It's a big, big challenge for humanity, for sure. It's not the end of the world.
Yeah, I agree. I don't believe it's the end of the world. It's a new set of challenges. It's the end of the world in the same way that when the internet came online, it was the end of the world. When automobiles came out, they were the end of the world if you owned a buggy business. It's just going to have to change the way that we think and the way that we interact with one another.
Yeah, you're absolutely right. It's a new set of opportunities too. Now there's a chance for companies that can watch AI, that can monitor AI, that can create guardrails, that can build trust. For instance, for us as a security company, when we evaluate third parties that are AI companies, we have to evaluate their ability to build trusted AI.
Are they going to make sure that AI models don't leak information? Are they going to make sure that they don't hallucinate? For folks that might not know what that is, if you're playing with ChatGPT and you ask it something, it might tell you something that it knows that it found online, or it might tell you something that it made up because of what's likely from a language standpoint.
They do have the ability to be creative. It's very hard for you as a user to discern which. Did it just make this up, or is that actually true? There’s definitely risks and opportunities, for sure.
I've heard the expression that ChatGPT is always confident. It's either confidently right or confidently wrong. It's your job to figure out which.
Exactly. When I started using ChatGPT, which is really at the same time as everybody else, while my team is highly aware of all these sorts of technologies on the consumer side, I was not particularly focused on that one myself until around the time that everybody was. But I actually was delighted by the transparent aspects, by the fact that there’s disclaimers everywhere, and that they try very hard to educate folks on the limitations. I think it's just so critical that folks understand those.
The concern is when people start integrating the API into platforms that don't have those guardrails and don't tell you, “Hey, we're actually using ChatGPT to answer these questions, but we're just not going to tell you that.” It results in, unfortunately, people interacting with an AI when they don't think they're interacting with an AI, and they don't understand the limitations.
You're absolutely right.
I guess maybe it's worth talking a moment on the AI that you use for assessing risk is not a conversation, or is not the same thing as a conversational, “Hey, produce me an article about AI hallucinating.”
Yeah, you're absolutely right. In the AI space, we talk about this as the problem space. How big of a problem are you going to try and solve? We all hear about folks trying to create sentient beings, essentially, with AI.
That's been what AI has been thought of for many years by science fiction. I don't even know if it's really technically a robot, but there's the robot from Saudi Arabia that folks have said was a sentient being. Obviously, that stuff is completely false. There's no such thing as that.
What that means from an AI point of view is, if you want to apply AI to a problem, the whole concept of human language and human experience is that's a really big problem space. You have to know really a lot to be able to do that. That's not possible today, not even close.
When I worked at a company called ASAPP out of New York as the Chief Information Security Officer, we were focused on customer service automation, which is for particular verticals like airlines, telecommunications, and banking. Particular finite problem spaces from a language standpoint. We found that we could deliver very, very powerful, very, very useful results, both directly with interactions to customers, as well as augmented interactions that go through agents, making agents into super agents, if you will, via an augmented experience.
It's all about shrinking the problem space and about access to large amounts of data. This is the same approach that we've taken at VISO. We're focused on the automated interpretation of what we call security-related language—language that describes security practices, security controls, security assurance routines and technology. We build models to evaluate that language, which is something that we've had great success at and that's been very powerful.
At the same time, we use a supervised machine-learning model with an expert in the loop. We produce results that are trusted and verified. We use real third-party risk auditors to essentially train our models, label our data, and make sure that our customers get completely accurate information that we stand behind.
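VISO TRUST's actual models and training data are not public, so as a purely illustrative sketch of the general pattern Paul describes, a narrow classifier over "security-related language" that defers to a human expert when it isn't confident, here is a toy Naive Bayes example. All snippets, labels, and thresholds are invented for illustration.

```python
from collections import Counter
import math

# Toy labeled examples: snippets of "security-related language" from
# invented audit documents. A human expert has labeled each snippet as
# evidence of a security control (1) or as irrelevant (0).
LABELED = [
    ("data is encrypted at rest using AES-256", 1),
    ("annual penetration testing is performed by a third party", 1),
    ("multi-factor authentication is required for all admin access", 1),
    ("our office has a kitchen and free snacks", 0),
    ("the marketing team publishes a monthly newsletter", 0),
    ("employees may bring pets to work on fridays", 0),
]

def tokenize(text):
    return text.lower().split()

# "Train" a tiny Naive Bayes model: word counts per class.
counts = {0: Counter(), 1: Counter()}
class_totals = Counter()
for text, label in LABELED:
    counts[label].update(tokenize(text))
    class_totals[label] += 1

vocab = set(counts[0]) | set(counts[1])

def score(text, label):
    """Log-probability of the class given the text, with add-one smoothing."""
    log_p = math.log(class_totals[label] / sum(class_totals.values()))
    total = sum(counts[label].values())
    for word in tokenize(text):
        log_p += math.log((counts[label][word] + 1) / (total + len(vocab)))
    return log_p

def classify(text, margin=1.0):
    """Classify a snippet, deferring to a human expert when the model
    cannot separate the two classes confidently (the expert in the loop)."""
    s1, s0 = score(text, 1), score(text, 0)
    if abs(s1 - s0) < margin:
        return "needs_expert_review"
    return "security_control" if s1 > s0 else "irrelevant"

print(classify("access is protected by multi-factor authentication"))  # security_control
print(classify("the team publishes a monthly newsletter"))             # irrelevant
print(classify("the annual office party"))                             # needs_expert_review
```

The point of the `margin` parameter is the workflow Paul describes: low-confidence outputs are routed to a human auditor, whose verified labels then become new training data, keeping the results trusted while the problem space stays deliberately narrow.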
It's specialized AI versus general AI.
Yeah, for sure. There are aspects that we look at like reading summaries or things like that, where we may use conversational AI here and there, and large language models. The majority of what we do, and that differentiates us as a company, is using stuff that we built that's very specific to our problem space.
You can't go out and tell your AI to hack into the FBI in the way that Kevin Mitnick would.
No, I can't. Thankfully, I can't, and I won't.
Not to be flippant, but do you see a day where AI starts to become not sentient, but a utility risk, where people who don't know how to hack can use an AI's ability to go out onto the internet and do things?
I do think so. The good news is, that's going to happen incrementally. A little at a time, the different aspects in what we call the Cyber Kill Chain, will be automated or made more intelligent over time. It's definitely happening already on the social engineering side, leaps and bounds in just the past few years. I expect to see it happening in a more technical way as well.
Actually, in some ways, there’s more complexities there to handle. Again, we are going to see that more incrementally, but we are going to see that too, for sure.
Instead of script kiddies, it will be AI kiddies.
Yeah. You heard it here first, folks, from Chris Parker. AI kiddies.
All right, I'm going to go out and register that domain name right now.
Yeah, there you go.
As we wrap up here today, do you have any advice for people who are looking at this space and going, “AI risk, oh, my gosh. What do I need to know? How does this impact me? What should I be doing about it?”
Yeah, a great question. I think educate yourself. Go to places like Wikipedia. Look for things. Search for things around trusted AI. Ask the bots that you're dealing with, where you know you're actually dealing with a company, whether they're bots or not. Keep an awareness of what you're doing and where you're interacting. Think twice about, of course, sharing your information.
You're going to have to look harder on the social-engineering side. When you get messages, you're going to have to be more careful. For folks in industry right now, obviously, with the SVB incident, there’s lots of wire fraud opportunities.
Voice verify, outbound voice verify every transaction. I think a lot of that goes for consumers too. Anything where somebody's asking you for money, always outbound voice verify. You make the call. You identify them before you take those steps.
Yeah, double-check everything. If people want to learn more about you and VISO TRUST, where would they go?
Thanks for asking, Chris. Go to www.visotrust.com. You can also email me at [email protected]. A great resource is looking us up on LinkedIn. You can look at the VISO TRUST profile on LinkedIn, as well as my own, Paul Valente, CEO and co-founder. Lots of exciting stuff there. We'd love the chance to interact.
Awesome. We’ll make sure to include all those links in the show notes to make it easy for people. Paul, thank you so much for coming on the podcast today.
Chris, thanks for having me. It's been really delightful. Thanks to all your listeners for listening.