The intersection of AI and cybersecurity is changing faster than anyone expected, and that pace is creating both incredible innovation and brand-new risks we’re only beginning to understand. From deepfake ads that fool even seasoned security professionals to autonomous agents capable of acting on our behalf, the threat landscape looks very different than it did even a year ago. To explore what this evolution means for everyday people and for enterprises trying to keep up, I’m joined by Chris Kirschke, Field CISO at Tuskira and a security leader with more than two decades of experience navigating complex cyber environments.
Chris talks about his unconventional path into the industry, how much harder it is for new professionals to enter cybersecurity today, and the surprising story of how he recently fell for a fake Facebook ad that showcased just how convincing AI-powered scams have become. He breaks down the four major waves of InfoSec from the rise of the web, through mobile and cloud, to the sudden, uncontrollable arrival of generative AI. He then explains why this fourth wave caught companies completely off guard. GenAI wasn’t something organizations adopted thoughtfully; it appeared overnight, with thousands of employees using it long before security teams understood its impact. That forced long-ignored issues like data classification, permissions cleanup, and internal hygiene to the forefront.
We also dive into the world of agentic AI, AI that doesn’t just analyze but actually acts, and the incredible opportunities and dangers that come with it. Chris shares how low-code orchestration, continuous penetration testing, context engineering, and security “mesh” architectures are reshaping modern InfoSec. He spends a lot of time on the human side of all this: why guardrails matter, how easy it is to over-automate, and the simple truth that AI still struggles with the soft skills security teams rely on every day. He also shares what companies should think about before diving into AI, starting with understanding their data, looping in legal and privacy teams early, and giving themselves room to experiment without turning everything over to an agent on day one.
“Agentic AI is powerful, but it can also act on your behalf in ways you didn’t expect. That’s where the real risk comes in.” - Chris Kirschke Share on X

Show Notes:
- [00:00] Chris Kirschke, Field CISO at Tuskira, is here to explore how AI is reshaping cybersecurity and why modern threats look so different today.
- [03:05] Chris shares his unexpected path from bartending into IT in the late ’90s, reflecting on how difficult it has become for newcomers to enter cybersecurity today.
- [06:18] A convincing Facebook scam slips past his defenses, illustrating how AI-enhanced fraud makes traditional red flags far harder to spot.
- [09:32] GenAI’s sudden arrival in the workplace creates chaos as employees adopt tools faster than security teams can assess risk.
- [12:08] The conversation shifts to AI-driven penetration testing and how continuous, automated testing is replacing traditional annual reports.
- [15:23] Agentic AI enters the picture as Chris explains how low-code orchestration and autonomous agents are transforming security workflows.
- [18:24] He discusses when consumers can safely rely on AI agents and why human-in-the-loop oversight remains essential for anything involving transactions or access.
- [21:48] AI’s dependence on context becomes clear as organizations move toward context lakes to support more intelligent, adaptive security models.
- [25:46] He highlights early experiments where AI agents automatically fix vulnerabilities in code, along with the dangers of developers becoming over-reliant on automation.
- [29:50] AI emerges as a support tool rather than a replacement, with Chris emphasizing that communication, trust, and human judgment remain central to the security profession.
- [33:35] A mock deposition experience reveals how AI might help individuals prepare for high-stress legal or compliance scenarios.
- [37:13] Chris outlines practical guardrails for adopting AI—starting with data understanding, legal partnerships, and clear architectural patterns.
- [40:21] Chatbot failures remind everyone that AI can invent policies or explanations when it lacks guidance, underscoring the need for strong oversight.
- [41:32] Closing thoughts include where to find more of Chris’s work and continue learning about Tuskira’s approach to AI security.
Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review.
Links and Resources:
- Podcast Web Page
- Facebook Page
- whatismyipaddress.com
- Easy Prey on Instagram
- Easy Prey on Twitter
- Easy Prey on LinkedIn
- Easy Prey on YouTube
- Easy Prey on Pinterest
- Tuskira
- Chris Kirschke - LinkedIn
Transcript:
Chris, thank you for coming on the podcast today.
Thank you for having me.
Can you give myself and the audience a little bit of background about who you are and what you do?
Chris Kirschke
Sure. So I’m Chris Kirschke, I'm the field CISO at Tuskira. I generally have four masters here. Our sales team, which I support when it comes to having the CISO-level conversations and helping our customers build a strategy around how to leverage the Tuskira platform, the cybersecurity mesh, and our agentic AI. I support our marketing team. I get to go to the fun events like Black Hat and RSA and, you know, host various dinners. And then our product team: what's the market doing, what are our customers asking for, what do we need to add to our platform? Bringing that feedback into the product feedback loop. And then our internal security. Whether it's SOC 2 renewals, ISO 42001, 27001, our overall platform security. So those are the four masters I serve here.

And so what is a field CISO versus a non-field CISO? I have never heard the term field CISO, so I'm really curious about what that means.
Yeah, usually a field CISO has no operational responsibilities for the actual security of the company.
Okay.
There will be, you know, a CISO for the tens to thousands of individual employees and systems and platforms out there. And then they will hire subject matter experts that have that, like, to be out and social and vocal and are good relationship builders. And those will be your field CISOs. And they are out there with the sales and marketing and the go-to-market teams primarily.
I like that. A CISO with a human touch.
A human touch. You just swap targets, you know. You trade the target from the attackers for the revenue target from your CRO.
Nice. So how did you get into this field?
By accident. Actually, when I say accident, more of a social experiment. I was a bartender. And my roommate at the time, he was actually a biochemist who decided to get out of that and get into IT. And this was 1997, one of the great vintages for California Cabernet. The folks at the bar were like, hey, you should do it. They all worked in IT, in various places. I was in DC at the time and they had a bunch of people in. They tipped well and told me to go for it, so I went and got me an eMachine from OfficeMax.
Oh my goodness. I remember e-machines.
Yes. And an NT 4.0 workstation CD and book. And a week later, I was an MCP certified knucklehead that really didn't know anything other than how to install and uninstall NT on an e-machine. But at the time, it was the boom. And I was at a job fair, and I had a job at the Patent and Trademark Office two weeks later, and never looked back.
That's pretty neat. It is so much more difficult for people to get into cybersecurity now than it was back when we were younger.
And it's going to get harder, I think, with AI.
Yeah. One of the questions before we start talking about AI: on a lot of these podcasts, we talk to counter-scam, counter-fraud experts and cybersecurity experts, and I really want to destigmatize being a victim of those. And so I want to ask you, if you have a story to tell, have you ever been a victim of a cybersecurity incident, scam, or fraud?
I've been part of plenty of incidents across my career. But yeah, actually, it wasn't more than six months ago. I was bored on Facebook and, you know, I clicked on an ad, went to the website, bought the promotion. And not 20 minutes later, I saw a post from that vendor saying, it's a deepfake.
Oh, wow.
And, you know, they're imitating us on Facebook. It's frustrating because I paid with PayPal, you know, thought that was the right way. I go back, I filed a report. They're like, tough, sorry. It's a legitimate vendor. And I'm like, here's the actual post from the real vendor explaining what just happened to me and why this is not the legitimate vendor. And PayPal was like, no, legitimate checkout process. Tough.
So were there any warning signs in the shopping experience, other than paying with PayPal?
You know, that was the thing, I'm pretty good about being careful. Man, AI just makes it so close. And I looked at the usuals, never saw a warning sign. Never saw a warning sign.
And so was the product purchase just kind of a, I was planning on buying this, or was it kind of a spur of the moment thing? Hey, that looks interesting.
Oh, no, it was a spur of the moment. Sounds like a great deal. You know, I'll click on it. Facebook. I mean, you'd think their safety group would do something.
Well, no.
Yeah.
I mean, that's a challenging thing. Like, what I find frustrating is that if you and I can't get it right 100% of the time, how is anyone supposed to, someone who doesn't live and breathe this stuff, who isn't, you know, Peter Parker with the Spidey senses going all the time, looking for every suspicious little thing? I can't fathom the discomfort that everybody else goes through, and the fear of, is this a legitimate site? I just don't have a way of figuring it out.
Yeah, I mean, like I said, I mean, it's the first time in 20-some odd years that I've gotten dinged and, like I said, I pride myself in being careful, but got caught.
Yeah, hopefully the financial loss was not too significant.
Yeah. More of an inconvenience. And a frustration of like, ah.
Yeah. How did I do that to myself?
They got me.
So you kind of touched on AI there. AI is this really interesting thing to talk about, particularly the intersection of AI and cybersecurity. So can you give us a little primer on how AI and cybersecurity have been rolling in together?
Yeah. You know, I think we talk about this a lot. The four stages, right? Four different evolutionary things in InfoSec. The first was the web. Everything was self-contained, then HTTP showed up and now everything was on the internet. Like, what do we do? Hey, let's build a web application firewall, because, you know, Cisco firewall ACLs weren't enough. Then it was mobile, and that rose in, you know, BYOD, MDM, all the neat acronyms that Gartner created for us. Then it was cloud, you know, somebody else's data center, but you couldn't architect it like a data center because it didn't work that way.
And with all of those, we essentially had no choice, right? We had to adopt the web. We had to support mobile. We had to test and try the cloud. But GenAI just showed up one day and a thousand people in your company were using it. Yeah. And you didn't have controls. You didn't have any idea of what it was doing. There wasn't the visibility that we needed.
So from the get go, we end up having to play catch up, you know. I remember at the very beginning, like, oh, you're that CISO that just blocked it. Well, what do you want me to do? I have no idea how to de-risk this. Yeah, but it's really cool, dude.
The world's best argument.
Yeah, like, that's great, but you can upload a file to it and I have no idea where it's going or how to control what that content is. And now, by the way, it's training on it. I always laugh because, like, oh, well, you know, our data could be leaked into that foundation model. I'm like, do you know how hard it is to go find a piece of data out of a model? The needle is harder to find in that haystack. But there's all kinds of other things where, again, I think that aspect of it was great, because it forced everybody to truly pay attention right away.
I had a great conversation on this a couple of weeks ago. It forced the hygiene that security's always kicked down the road: data classification, data mapping, data access rights. Copilot for Office 365 exacerbated this when they were launching it, because it used the old SharePoint permission model. Which, you know, we all kept totally up to date and clean and hygienic, right? So you turn it on and all of a sudden people are like, oh yeah, I can Google cool prompts to use on Copilot. And they're not innocent prompts. They're like, who else got a bonus? Who else got the following rating? What security incidents have we had? Are we planning layoffs? Like, the pertinent things that they think are important.
Yeah.
So I think from that perspective, it forced us to actually play offense for once. And then it also forced us to take a look at it. Because the business was coming back and saying, look, it makes us X amount more efficient. All the vendors were coming and saying it's going to revolutionize the enterprise and, you know, the way your people work. Well, it forced us to ask the question of, well, how's it going to revolutionize how security works?
Yeah.
I think for the first time, we finally have an opportunity to look at it from less of a reactive perspective, and ask how we leverage it to be proactive.
Have you seen a lot of AI implementation happening on the pen testing side, on the offensive side?
Yeah. I think there are industries where it is very well suited, and pen testing is one; there are multiple firms out there. I'll go back to having known Snehal and the Horizon3 team. They've led that charge, and I think that's a phenomenal path that they've led us down. We always used to laugh that you go through your SOC 2, you go through your PCI, like, okay, it's time to get that annual pen test on. And now you just leave it in place, and you're integrating it into other systems and platforms. Again, it's like, okay, I don't have to do this annually and have it be stale in three days or at the next change request that comes through. It's always there, and it's fed data that actually keeps it relevant.
And so, you know, one of the advances in AI, I guess, recently is agents being able to take AI and make it do stuff, not just talk to you. How has that impacted what you do?
Well, it's changed the game. I mean, I am not a developer. You know, I see Caleb at a lot of the CSA stuff and he's a developer, he develops. I'm not that person. When I was an engineer, I was a crappy script person too. You know, agents are different. The low-code platforms that now allow you to connect and build business process workflows and orchestrate the agents, like, it's game changing.
I mean, I can sit and operate my world out of that platform, and the agents are going to do things and update things, instead of going to, you know, we used to have the YAC problem in security, right? Yet another console. There is no YAC anymore. I do it from my centralized system, whatever that is, and I can orchestrate from there. And I think that's why I made that comment of, like, breaking in is going to be so much harder unless you have that base skill set from the get-go.
Do the agents themselves introduce risk?
Oh, horribly so. Horribly so. I was talking with a bunch of CISOs at the Intel CISO summit about this the other day. By definition, an agent is something that acts on behalf of somebody else. And I had a really interesting conversation with an HR person about this, because she was like, I want to fire somebody for the work that their agent did. I'm like, what the heck are you talking about? She's like, look, that agent had their identity and assumed a role. If they did that from their laptop and did something against policy or that was malicious, we would fire them.
Yeah.
So why doesn't the agent count too? And I'm like, that's wholeheartedly scary. And then I'm talking to a CISO of a very large manufacturing company in the tech space, and he's like, I would never, ever fire anybody for that. Like, I want them experimenting. We have guardrails, we're using agents. We're not 100% sure that everything is accurate, you know. But we're not using it in mission critical systems. Right? You know, those are different. But all of this has been there.
Neither of those extremes seem reasonable.
No. And look, the nice thing is that the security industry itself rallies around, and they do try to solve the big problems in a community fashion. Now, there's always going to be Sand Hill Road and the VCs and all the founders that are going to try to solve it for us and sell us something to do it. But you look at the open-source community, or I won't say open source, more the engineers that are actually building true solutions and just publishing them. It's phenomenal to watch, I'll tell you that.
Are AI agents safe for laypeople to use in commercially available AI?
Yes.
So I'm safe to have AI choose my platform and have it book my vacation for me.
I would say yes. Now, I just talked about getting scammed. So I would want to be, I would want to say like, hey, tell me when it's time for me to enter my credit card.
Yeah.
Right. But even then you can solve that problem as well, you know? But yeah, I mean, we just went to Oktoberfest in Germany and we did the entire thing using AI.
AI planning or AI actually doing the booking for you?
So booked hotels. Found us the cheapest airfares. What we didn't say is, okay, pick the one with the most common mileage reward program across the following six profiles, right?
Yeah.
We didn't do that. Like, we could have, you know, but I think it's phenomenal. It's an interesting time, that's for damn sure.
Yeah, fear is maybe not the right word. My concern, the inkling in the back of my head, is agents' ability to discern real sites from scam sites and things like that. Is there going to be this crop of platforms that arise to take advantage of AI agents, to get them to do things that we're not expecting, or to do fake bookings? And we don't know, because it's six months until we go on our vacation. We show up at the airport and, oh, we don't really have what we thought we had. All because people are working in the background not to target humans anymore, but to target the AI to get it to buy things.
I think you will see that. It won't even be a cottage industry, it'll be a real industry. I mean, it'll probably have more ARR than most solid startups do, you know, and it'll be a bunch of kids out of some wacko country or a living room somewhere, right? But yeah, I do agree. And you talk about the human in the loop. Everybody wants to go there to make sure. I think you can't wholeheartedly just trust it.
Yeah.
Right. You can't just say, okay, hey, go plan me a vacation, here's my credit card, let me know what you got done. There is still some aspect of responsibility and human in the loop that has to happen. But yeah, I look forward to that.
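The checkpoint both speakers keep returning to, an agent that researches freely but pauses before anything involving money or credentials, can be sketched as a simple approval gate. This is a hypothetical illustration, not any particular agent framework's API; the action names and the `run_step` helper are invented for the example:

```python
# Hypothetical human-in-the-loop gate for agent actions.
# Sensitive steps (payments, credentials, final bookings) pause for
# explicit approval; everything else runs autonomously.

SENSITIVE_ACTIONS = {"enter_payment", "share_credentials", "confirm_booking"}

def run_step(action, execute, approve=input):
    """Run one agent step; sensitive actions need a human 'yes' first."""
    if action in SENSITIVE_ACTIONS:
        answer = approve(f"Agent wants to '{action}'. Allow? [yes/no] ")
        if answer.strip().lower() != "yes":
            return {"action": action, "status": "blocked"}
    return {"action": action, "status": "done", "result": execute()}

# The research step runs freely; the payment step waits on a human.
itinerary = run_step("search_flights", lambda: "MUC round trip, $740")
payment = run_step("enter_payment", lambda: "paid", approve=lambda _: "no")
```

The point is structural rather than clever: the agent can plan all day, but the irreversible step is routed through a person, which is exactly the "tell me when it's time to enter my credit card" behavior described above.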
Yeah, I mean, to me, the AI that I look forward to is the AI that doesn't exist, at least not so reliably, today. I want the AI in my glasses to say, this is Chris. You met him at this conference. His wife's name is this, and the kids, and you guys went out and had a beer at this bar, and the beer that he got was really crummy. He hated it and would much rather have this. An AI that could book my vacations and work as an assistant like that. In the spirit of trust but verify: do research for me, give me ideas, and then let me do the heavy lifting.
Well, I think there's an interesting dichotomy there, and that's context, right?
Yeah.
Before I open my mouth, what context do I need to have to formulate the next sentence, right? And I think that's where, if you look at a lot of the various technologies or companies that have come up, it's around the ability to use context that normally humans had to go get on their own or figure out on their own. And that's made us move exponentially faster. And I think we're early stages in the context engineering space as it pertains to security and infrastructure. You're seeing a lot of innovation there.
I think with the cybersecurity mesh architecture that, you know, Gartner's now acronymed again, you've got to lay that foundation correctly. And I think we're having a lot of success around that foundational build. But you're seeing a lot of organizations transition from the traditional SIEM, to the security data lake, to now the context lake, and then starting to figure out how to leverage all that context. Yeah, I think it's a phenomenal time right now.
Yeah. And it's a kind of a bit of the wild, wild west, which there's a certain amount of excitement to that.
Yes.
It's not well established. It's not telegraphed that, well, this is clearly the pathway everyone's going to go and it's just a matter of who does it better. We don't know what the best path is yet.
No, I don't think we do. I think we're so early in this journey. You know, some Ph.D. in mathematics and AI is going to break some preset rule of AI and be like, okay, let's do this again.
You know, someone is going to realize we don't need 500 terawatts of power to run AI.
Because it runs on the moon.
We'll just beam the power back to Earth, beam the results back.
Yeah. Above my educational level.
Yes. Do you see a point, like, I've always kind of wondered. Before we started recording, we were talking about the Cloudflare incident and Microsoft Azure absorbing a massive botnet denial-of-service attack. And I feel like in the last five years or so, there have been organizations, government and private, that have worked together to take out botnets. Hey, we're just going to go in and patch these systems so they can't be used; we're going to use the opening the hackers created, close the door, and walk out behind us. Do you see a point where AI starts to become that way, and we start to use AI to go out, find the botnets, repair them, repair these platforms that people are launching attacks from, you know, proactive counter-hacking? We haven't figured out the terminology yet.
Yeah, I don't think, look, the carriers have been notorious about not getting in the way of the pipe, right? Unless you want to pay them for it. I don't think we're going to see it at the carrier level. I think we'll see it at the CSPs, the major infrastructure providers. I think in the enterprises, we're already starting to see a lot of that. I listened to a CISO not too long ago say, I don't care about code level vulnerabilities because we have agents that run behind and actually fix the code. And I'm like, seriously? Like, yeah, I mean, yes, we scan it. The scans get fed to the agents. The agents go rewrite, you know, redo the code. I'm like, that's pretty ballsy. Yeah. He's like, double edged sword. Now I have a bunch of lazy developers who don't care about writing secure code because my stuff runs behind them and fixes it.
And you hope it fixes it.
Well, I think it's got to go through QA still. So it's expected to get caught elsewhere, expected to get caught as it moves up the tiers. And as of right now, they're positive on its success. And this is not a small shop. We're probably talking, you know, 50,000 or 60,000 developers.
That's a little scary. Like that seems very, uh, Wild West is not the right terminology, but very cavalier almost. But maybe it'll work.
Yeah, until it doesn't. I mean, you've got to put your guardrails in to verify, but if it works, then it's only going to do better as it learns and trains.
Yeah. AI is such an interesting, such a fundamental game changer on so many levels.
It is. I mean, it's crazy. You talk to more people than I probably do. What do you think the craziest implementation of AI out there that you've seen is?
I wouldn't even know where to begin, because someone just comes up with an idea of something, not random, but totally off the top of their head, and says, let's apply AI to this and make AI do it. And I think I'm still back in, you know, I wouldn't consider myself a Luddite, but there's part of me that doesn't want to get my hopes up that AI is going to be as game-changing as people want it to be. I worry about, not necessarily the financial bubble, so to speak, but that so many people and entities are putting all their hopes and dreams in AI, that it will solve all the problems we have. Food distribution? Oh, AI will solve it. Cybersecurity? AI will solve it. Geopolitics? AI will solve it. And I'm getting nervous about this reliance on a tech that I don't know has been proven out long term.
Yeah. Look, we're always going to need the people that provide the experience and the thought leadership, the actual doers. I don't think we're ever going to get to this point of, hey, go babysit the AI. At least not in my lifetime. I just still believe in people, and I still believe in their ability to take their experience and their subject matter expertise to a point that's farther than what AI will do.
Yeah, I tend to believe that AI will become, in the long run, AI will become a good support platform or tool or instrument, whatever you want to call it, for the things that we do. But humans will still drive the concepts and the direction.
Yeah, it's like security. It's funny, it's always been a relationship business. You've got to go build relationships with the business, understand how it makes money, who makes it run, who actually makes it run, not at a layer-eight level, but at the actual functional level. And at Tuskira, we talk about this: moving the muck work over to our AI. And it's not just so you can save money or headcount. It's actually so you can have your teams focus on the relationship aspect of InfoSec. Instead of hammering people about going and fixing vulnerabilities, show them what to fix, why to fix it, and how to fix it in order to reduce the most risk in the fastest manner possible. And then give them back the time to go work with developer teams on secure design and secure coding principles, and with the architecture teams on better patterns. I think it's a force multiplier more than anything, in my opinion.
Has your experience been that AI in InfoSec has helped prioritize and address the biggest weaknesses first, in a way that maybe people can't? Or are people still better at determining, that's the issue we should address first?
I think it depends on the problem you're trying to apply it to. So if it's binary, like, say, AppSec, it's either vulnerable or it's not. It's either a code vulnerability or it's not. If you rewrite the code the right way, it's no longer vulnerable, right? ‘Til the next library gets broken. When you have that binary relationship between good or bad, then AI is phenomenal. When there's a soft aspect, it's horribly bad. It's good at guiding, you know, giving you some content, a lot of context, but you're still relying on the soft skills, negotiations, et cetera. Like, I can have it write a risk into my register with all of the context. But there's no way it's going to be able to accept the risk. So I think from that perspective, I see it as a force multiplier more than anything, and the ability to get teams focused on what matters most and more focused on the relationships in the business.
I like that. I think that's a good perspective to have. AI is not the answer to everything. It's just another tool in the tool belt.
No, there's so many soft skills when it comes to security. I mean, we did a mock deposition, which scared the bejeezus out of me, because the person doing the mock deposition was like, oh, this is the nice-guy version. I'm like, are you kidding me? He's like, oh no, my record is 36 seconds before being able to say, no more questions, and then turn to the jury and say, obviously the CISO is incompetent, I'm done. And I'm like, 36 seconds? I can talk around the problem longer than that. He's like, nope.
But then, yeah, you take that same outcome and the lessons learned from it, and you say, how do I actually leverage AI to make sure that I have the details that give me the ability to make it through a deposition without sounding incompetent? That's the bridging that I think security is always going to have to do, leveraging AI to prepare, because it's a situation that you end up having to go through when you're in security. There's always HR and lawyers. So yeah, I love it.
We're not going to replace HR with AI.
My HR is not listening. So I would say I'd replace HR with, you know, anything.
There's a love-hate relationship there.
There is. I mean.
And that's true for most people. We love HR when it works to our advantage. We hate HR when it foils our world domination plans.
Yeah. My dad ran HR for major corporations for his career, so I always have a funny relationship with HR. And I swear and, you know, I'm gruff. But Bottle Shock, the movie about the guys at Chateau Montelena in Napa back in the '70s and the Judgment of Paris tasting: the two sons want to fight? Not in the house. Get the boxing gloves, go outside. When you're done, you're done. To me, that should be HR.
So as we come in for a landing here: for people at a company who aren't in cybersecurity and want to start integrating AI into their corporate workflow, because I think that's the perspective most people have, what are the guardrails people should be thinking of when starting to apply AI within their organizations?
To keep in mind, yeah. So my first recommendation, any time you start: remember what AI is. What is the new oil? It's data, right? So go partner with legal, privacy, and your data teams, and figure out what data is allowed to be used first. Make sure it's classified, labeled, et cetera. Put the guardrails in, something like Pablo if you're using open source; there's plenty of commercial solutions out there. But make sure that you have your data clean, right? Because the last thing you want is, oh yeah, so-and-so just decided to go access ePHI data and shove it into an agentic workflow. And you're like, dude, that's an incident.
So I think from that perspective, starting with the data first, that's your foundation. Figure out what guardrails you need and develop them; there's plenty of architectural patterns out there based on how you want to use or deploy the agents themselves. MCP is most likely going to be your serving layer, so figure out if it's going to run off your CSP or if you're going to run it in-house. You know, at the Cloud Security Alliance, we're building the six pillars of MCP security right now. Check that out. Join and help.
But then understand the use case. More than anything, understand the use case. Is this truly going to help? And when I say help, define help. Go to your marketing person and ask them, how do you define ROI? Go to your BizOps person: how do you define ROI for our security investments, like the CSPM that I bought? Get some level of Excel-based spreadsheet that shows the ability to prove ROI. And then do it yourself. Do not hire anybody. Do it yourself. Figure out how to screw it up first, fix it second, and then deploy it correctly third.
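A minimal sketch of the "data first" guardrail described above: before anything is handed to an agentic workflow, check its classification label and do a crude scan for sensitive content. The labels, the SSN regex, and the `admit_to_workflow` helper are all hypothetical; a real deployment would sit on a proper classification or DLP service, this only shows the shape of the gate.

```python
import re

# Hypothetical pre-flight gate: only labeled, non-sensitive data
# may enter an agent workflow.
ALLOWED_LABELS = {"public", "internal"}
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # crude PII heuristic

def admit_to_workflow(doc):
    """Return True only if the document is labeled and looks non-sensitive."""
    if doc.get("classification") not in ALLOWED_LABELS:
        return False  # unlabeled or restricted: stop before the agent sees it
    if SSN_PATTERN.search(doc.get("text", "")):
        return False  # PII slipped past labeling: stop and flag for review
    return True
```

Rejecting unlabeled data by default is the forcing function here: classification has to happen before the agent work starts, not after an incident.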
Yeah, I've seen lots of public instances of poorly deployed AI where someone just didn't think about it. We want to replace humans, but we didn't think about what happens when it runs across a problem that it doesn't know how to solve. What does it do? It just hallucinates.
Oh, dude. Kurt Seifried over at CSA, he did an internet crawl. It was some god-awful amount of MCP servers that he discovered on the internet. And it was a nightmare. He's like, yeah, most of them had API keys, password tokens, all that stuff stored, hard-coded; found it. No authentication. I could just use it. I'm like, this is not the state of the world we want.
No, no.
Yeah.
And make sure you know what it's doing before you set it free.
Yes.
I forget what airline it was, but one of the airlines decided that their first tier of support was going to be an AI chatbot. And when customers would start asking it policy questions, if there wasn't a policy, it would just go, well, here's what most airlines would do as a policy. Of course, it wouldn't tell the customer that most airlines would do this; it would just say, well, here's the policy.
Yeah.
The customer would go out and follow the policy, then go back to a real human and say, I followed the policy, I want this. And the person would go, well, we don't have that policy. It's not our problem that the AI lied to you.
Yeah, I know we're wrapping up, but I've talked about that one. Go look at all the terms of service. Not one of them will take responsibility for what the AI does.
Yeah. That should be alarming.
It's on us.
Don't give it access to your 401k.
Nope.
It might buy marshmallows for you.
Yes. Make up stock symbols.
We just invested in marshmallow stocks, doesn't everybody?
Yeah, exactly.
Chris, if people want to find you and Tuskira online, where can they find you?
Chris Kirschke
Tuskira.ai, linkedin.com/in/Kirschke or /company/tuskira. Our favorite platforms. Or come join us in any of our webinars or just reach out. I'm always happy to talk.
Awesome. We'll make sure to include all those links in the show notes. Chris, thank you so much for coming on the podcast today.
Thank you, Chris. I appreciate the opportunity.





