
We’re handing the mic over to our friends at Talking Threat Intelligence, a podcast by LifeRaft. On today's show, they had our very own Daniel Ben-Chitrit on to discuss the security implications of ChatGPT, how phishing scammers could use it to up their skills, and the safest way for OSINT researchers to prompt AI.

Key takeaways

  • Security implications of ChatGPT
  • “A force multiplier” for phishing scammers
  • The safest way for OSINT researchers to prompt AI

About Talking Threat Intelligence

Talking Threat Intelligence explores the intersection of OSINT and risk management. Each episode breaks down emerging threats that could impact an organization, offers insights into the latest technologies that can enhance security operations, and shares tips for leveraging threat intelligence to keep your staff, assets and customers safe.

References from the show

Daniel Ben-Chitrit
On the flip side, in the places where they're locking down access to data, that's where disinformation and this type of technology becomes even more scary because I don't even have to have a propaganda guy sitting there thinking up all these things. They'll just ask ChatGPT: what should I tell the Russian people about this war that's happening? And it'll tell you a bunch of stuff, and you can craft it, and you can use this to do some writing and to craft a narrative that sounds so realistic that people won't even question it.

Shannon Ragan
Welcome to Needlestack, the podcast for professional online research. I'm your host and Needlestack producer, Shannon Ragan.

Aubrey Byron
And I'm Aubrey Byron, producer on Needlestack and occasional co-host. Today we're doing something a little bit different. Our friends over at Liferaft recently had Authentic8's own Daniel Ben-Chitrit on their podcast to discuss what security teams need to know about ChatGPT. We thought it was a great discussion and that our listeners would want to hear it.

Shannon Ragan
That's right. Daniel is our Director of Product Management and he went on Liferaft to talk about the value and risk of the emerging AI technology marketplace and what the potential impacts of that could be in this very revolutionary era.

Aubrey Byron
In the conversation, we're going to hear Dan outline how ChatGPT can be used as a force multiplier for cybercriminals, scaling up the skills of even low level phishing scammers and making their attempts more effective. He also talks about how it could become a major source of disinformation.

Shannon Ragan
Yeah, it was also interesting to hear him walk through how to protect sensitive information when using ChatGPT, and how to use these tools to your own advantage while mitigating the potential security risks. So stay tuned as our friends at Talking Threat Intelligence discuss the implications of public versus private data sets, how to suss out disinformation and more.

Aubrey Byron
Here we go.

Robert Baillieul
Dan, thanks for joining me today.

Daniel Ben-Chitrit
Thanks for having me.

Robert Baillieul
What is the coolest application that you've seen for ChatGPT?

Daniel Ben-Chitrit
Yeah, so that's a good question. Outside of cybersecurity, and there's a lot of cool applications there, one of the coolest things I've seen is a guy who distilled the top 100 books in different genres. Fantasy, romance, all of these things. He got the CliffsNotes, took all of those, fed that back into ChatGPT and started writing books. And he's using Amazon's print-on-demand to sell ChatGPT-generated novels based off of the best-selling works out there. Really cool concept.

Robert Baillieul
That is so cool. So you'd instantly have all the best ideas from all the best authors in literature and you just get to condense it into your amazing story.

Daniel Ben-Chitrit
Yeah, one of the real best use cases of large language models, ChatGPT, all of these things, is to distill information and use that as your source of truth.

Robert Baillieul
Yeah, I've been playing around with this tech since it came out, and there's some great stuff, like just summarizing things or creating sock puppets and stuff like that. There is some really cool stuff that you can do just instantly. Writing novels is one of them. Writing articles. It's a really fun technology to play with. Maybe we'll just go back for a second for maybe the two people working in OSINT that haven't quite heard of ChatGPT yet. Can you just define what it is?

Daniel Ben-Chitrit
Yeah. So ChatGPT is from OpenAI, which is a company; it's their large language model, which is a type of machine learning model that trains an AI. So artificial intelligence, the big spooky Skynet, that's the thing that's trying to do human things, right? It's trying to write code or do things like this that a human might be able to do. It's trained with something called machine learning, which is what it sounds like: you're making a machine learn how to do these things. And then there's a thing called deep learning, which is a type of machine learning. And specifically within that you have an LLM, a large language model, which basically is just taking a lot of data and using that as the data set for the machine learning of that AI. I'm really simplifying it, obviously, but that's kind of the core. So the data that you're training on is very large in these cases.

Robert Baillieul
And I don't know how technical you can go into this, but how does it know what word to come up with next during this process when it's returning a response to a prompt?

Daniel Ben-Chitrit
Yeah, so obviously, again, this is very high level. I don't have a PhD in machine learning like some people out there, so I might mess this one up. But generally it takes all the data that it's collected, and you're talking about terabytes, petabytes of data, I mean large amounts of data, and it distills it into what it thinks a good response is. That's why sometimes you'll get gibberish or you'll get things that don't make sense. A key example of this is in machine translation. So if you ever put something into Google Translate and it gives you the wrong translation, something that just doesn't make sense, that's because it's using what it knows, but it doesn't have the full context. And that's where the large language model comes in. It takes a lot of language in, it uses its knowledge of what it's "read" before, and it tries to make an educated guess. It's not always right. And that's where we'll get into some of the pitfalls of don't take everything ChatGPT says as truth. But that's where it's coming from.
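
To make that "educated guess" idea concrete, here is a deliberately tiny sketch: a next-word predictor built from bigram counts. This is not how ChatGPT actually works (real LLMs use neural networks trained on enormous corpora, as Dan says); it only illustrates the predict-the-next-token framing, and the function names and toy corpus are invented for illustration:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str):
    """Count, for each word, which words tend to follow it in the corpus."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word: str):
    """Return the most frequent continuation seen in training, or None."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = (
    "the analyst reads the report and the analyst writes the summary "
    "the scammer writes the email"
)
model = train_bigrams(corpus)
print(predict_next(model, "the"))     # "analyst" follows "the" most often here
print(predict_next(model, "writes"))  # "the"
```

Scale the corpus up by many orders of magnitude and replace the counting with a neural network, and you have the rough shape of the "educated guess" Dan describes, including why it sometimes confidently guesses wrong.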

Robert Baillieul
Got you. And where can you find ChatGPT right now?

Daniel Ben-Chitrit
You can find a lot of different versions of ChatGPT. Obviously, if you type ChatGPT into Google, you'll find the OpenAI version, and you'll be able to see a lot of extensions that have integrated ChatGPT into browsers. A lot of security tools, and non-security tools, now have "powered by ChatGPT" just built inside of them. But you'll also see, outside of ChatGPT specifically, these models in things like Bard, Titan, which is AWS's version, and a lot of security platforms that are starting to build their own, like Purple from SentinelOne, et cetera. So the concept is actually being built into a lot of different tools. ChatGPT is just one version of it.

Robert Baillieul
Right, that's what I'm kind of excited about, to see where the technology goes down the road. You're using it right now with the prompt, and it's not that complicated to use, but I'll be kind of curious to see where the technology goes as you start using it in an application where it might be a little bit more intuitive to use for a specific purpose, whatever that might be, and seeing how the developers kind of take it to the next level. That's going to be kind of interesting to see where this goes down the road.

Daniel Ben-Chitrit
It's going to be really interesting to see how much it impacts our day to day life without just even knowing it. It's going to be powering things behind the scenes, not even going to be saying "powered by OpenAI" or "powered by Bard" or whoever. It's going to just be there and it's going to be impacting our life without us even knowing.

Robert Baillieul
Yeah, well, the two examples I'm thinking off the top of my head would just be like just misinformation online with the amount of stuff that you can put out now and then from a marketing perspective, just customize landing pages, customize advertisements that are tailored just for you as you're surfing. If it's not happening now, it's just a couple of months away.

Daniel Ben-Chitrit
Oh, for sure. This probably is happening now, we just don't even know it.

Robert Baillieul
Let's take this conversation back, because with ChatGPT and AI, there's so many different ways that we can take it. I want to keep it more on the OSINT and cybersecurity focus. Can you tell me, how do you see ChatGPT reshaping the landscape for cybersecurity or OSINT over the next couple of years?

Daniel Ben-Chitrit
There's so many different ways. I think back to the time of what people called script kiddies (they're still around), people that would take other people's code. Now I don't need to go and find someone else's code. I can just ask ChatGPT to write me a piece of code that does XYZ. So that's obviously one piece of it. It democratizes, as they say, coding and the ability to do things with code that a lot of non-technical people haven't had in the past. So that's definitely one. Another side is on the offense side: the people that are writing those Nigerian Prince emails that always have all the misspellings, well, now they have something that can write them a perfectly good English ransom-type note, or whatever they want, and send that out. And now it's a little bit harder to see. On the defense side, we have things like Microsoft's Copilot, which is in the SOC; they're actually using ChatGPT and OpenAI's LLM to do this, because they've invested a lot of money there. So that's kind of helping the SOC analysts. Instead of one person looking at one thing, they have AI-powered analysts now.

Daniel Ben-Chitrit
Then you go all the way to the OSINT side, where I think there are a lot of pros and cons. You mentioned disinformation. So as I mentioned previously, don't take everything you hear from ChatGPT as true, because it might not be 100% correct. And we'll get into that with the poisoning of data. But the other piece, which I kind of talked about with the whole book idea, is that it's really good, and it's going to be really powerful, for OSINT analysts to use this to distill the information that they're collecting. I go to 100 different websites during an investigation. It's almost impossible for me to distill that into a report or pull out the highlights within a couple of minutes. However, ChatGPT does it in seconds, and that's what's really powerful. I think we're going to see a lot of benefit in the OSINT world around that specific use case.

Robert Baillieul
I was feverishly taking down notes while you were giving that answer, and there's a bunch of different places where I can take this conversation next. There were three main bullet points, I think: offense, defense, OSINT. So why don't we go into the first bullet point that you mentioned there, offense. Tell me a little bit more about what ChatGPT gives adversaries. What kind of capabilities does it give them to possibly inflict damage upon the assets that we're trying to protect?

Daniel Ben-Chitrit
Yeah. Coming from my previous job, or jobs before, as an analyst, the main goal of a defensive person is to make it really tough for the offensive person to do what they want to do, so they waste more time and money. Time is their most valuable asset, and ChatGPT and these types of models generally make it a lot quicker for them to do what they need to do. They don't have to sit there and think and think and think, or rewrite and rewrite. ChatGPT will give them 15 different versions of the phishing email they want to send that are all completely tailored to the person they're targeting. It's not just spear phishing; it's extreme spear phishing at this point, because I can get a lot of information out of ChatGPT in seconds, use that myself, or just have ChatGPT write those for me. So speeding things up is number one. The second is expanding the technical prowess of a maybe non-technical offensive person. Most of the time you'll see ransomware or phishing attempts that have PDFs, and everyone knows don't click the PDF in the phishing email.

Daniel Ben-Chitrit
I can now make that a little bit more technical, because I am writing something that is powered by AI, and I don't have that knowledge, but the AI does. You take that a little bit further and you get to cybercrime, which I think is where the biggest damage is going to be done, because we have good content filtering, there's anti-AI spam filtering and all types of stuff. But the cybercriminals who are technical, who know how to use the technology in the best way, they're going to be able to expand ransomware as a service. Cybercrime as a service is what we're starting to see in this space. And that's where I think the most power is, because they know what they're looking for, they know what they're doing. ChatGPT is going to generate 90% of what they need and they're just going to do the other 10%, but that 90% is probably the biggest amount of time.

Robert Baillieul
So if I'm getting this straight, for the top skilled cybercriminals out there, this is just a massive leverage point, a force multiplier, and for the less skilled cybercriminals out there, this has just upskilled everyone overnight, essentially.

Daniel Ben-Chitrit
Exactly. And then the only other thing I'll just note, and this is more on the nation state side, I'll say, is the amount of disinformation that can come out of a machine learning algorithm like ChatGPT is the thing that scares me the most. And I'm sure there are a lot of people down the street in the White House and in DC, that are staying up at night because of the existential threats from a disinformation standpoint that will be able to come out of this type of technology.

Robert Baillieul
I want to put a pin in that disinformation thing and revisit it in a second. But I also just want to talk a little bit about one of the things that I'm worried about with this technology, which is, like you mentioned previously, that upskilling that everyone suddenly has. One of the most obvious examples is writing. Previously, maybe 5% of the population was professional-level at writing. Now, with ChatGPT, pretty much everyone overnight is pretty close to a pro writer with just a little bit of prompting on ChatGPT 3, let alone ChatGPT 4, which I haven't had the chance to play around with yet. And I can imagine it's the same thing here. It used to take a lot of technical skill and a lot of time to learn the coding you needed to be a successful cybercriminal. Well, with ChatGPT, you've really lowered the barriers, and I've got to imagine that with lower barriers to entry, more people are going to want to go into this.

Daniel Ben-Chitrit
Yeah, I'm sure we're going to see more people enter the cyberspace, but I don't know how worried I would be about too many more people coming in. What I worry about is people taking advantage of that ransomware-as-a-service, cybercrime-as-a-service model. What I've done is I've democratized the ability to run a ransomware campaign. And as the cybercriminal, I can now do that more efficiently. I can charge less, which means me, the buyer of that bad thing, is going to pay less, and it's going to make it easier for other people to do that type of stuff. That's where I think the impact is going to be the biggest. I'm sure we're going to see those script kiddies getting better, and we're going to see some of those more mid-level people getting better. But going back to the script kiddie comment: we worried about this over the last ten years in cybersecurity, and besides the DNS attack, which is a completely separate thing, we really haven't seen a huge impact. I mean, nation states obviously are the biggest adversaries, but script kiddies, they're around, but they haven't really made a big impact.

Daniel Ben-Chitrit
So how much this actually impacts that type of adversary, only time will tell, but I'm actually not too worried about that.

Robert Baillieul
Got you. Let's go on to the second point that you mentioned in your original answer about how this will change the game for people on the defensive side. And one of the things you talked about was Microsoft Copilot. Can you tell me a bit more about how this is going to change the game for defensive analysts?

Daniel Ben-Chitrit
Yeah, so I think everyone who has ever worked in a SOC knows you get so many alerts; alert fatigue is what they call it. And this is where I think AI generally will be the most impactful thing for a defensive organization. If everything comes in and you have a good AI, a ChatGPT-enabled analyst, for example, that knows what to look for, it can see the patterns and alert you when something it's seen before, or knows about because a news article about it popped up, shows up again. That's where I think it's going to really help on the defensive side. And the main reason why is because, similar to the offensive side where time is your most valuable asset, time is also a very valuable asset on the defense side. And if you have a million alerts coming in a day, how do you prioritize? Well, that's where AI comes in. It will tell you what to prioritize. It'll tell you, hey, these six different things are similar. You should look at these six things next because maybe they're related. One of the things I mentioned earlier, Purple by SentinelOne, for example, is an LLM model that they're using on top of their platform.

Daniel Ben-Chitrit
And this gets very technical into domain controllers and all these types of things. But you can start to ask it, what's my attack surface basically in a little bit more technical terms. And it will give you a report based on its data. So it's going to really speed up the efficiency of an analyst and that's going to make it easier for them to triage the most important things.
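
That "these six things are similar" triage idea can be sketched with nothing more than shared keywords. To be clear, this is not how Copilot or Purple actually work; it's only a minimal illustration of grouping related alerts so an analyst can triage them together, and the keyword list and alert strings are invented:

```python
from collections import defaultdict

def group_alerts(alerts):
    """Bucket alerts that share the same set of key indicators so an
    analyst can look at related events together instead of one by one."""
    keywords = {"ransomware", "phishing", "lateral-movement", "exfil"}
    groups = defaultdict(list)
    for alert in alerts:
        hits = frozenset(w for w in alert.lower().split() if w in keywords)
        groups[hits].append(alert)
    return dict(groups)

alerts = [
    "phishing email reported by finance",
    "phishing link clicked on host-17",
    "possible exfil from host-17",
]
for indicators, related in group_alerts(alerts).items():
    print(sorted(indicators), "->", len(related), "alert(s)")
```

A real AI-assisted SOC tool would use far richer signals than literal keyword matches, but the payoff is the same one Dan describes: the analyst triages a handful of grouped incidents instead of a million raw alerts.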

Robert Baillieul
That sounds like a very important piece of technology, because doing an assessment of your attack surface is quite time-consuming. If you can just type in a couple of prompts and have that done immediately, I've got to imagine the savings.

Daniel Ben-Chitrit
Yeah. Again, the more data you feed into it, especially one that's built for this type of stuff. ChatGPT is very general, but if you start to build your own models, or for cybersecurity companies like ours, if you start to build this into your platform and you train it on the right types of data, it's going to become more powerful for those defense organizations.

Robert Baillieul
Got you. So it sounds like from almost between the offensive and the defensive side of things here, I'm trying to think of an analogy from warfare. It's like you invent the cannon and then the defensive people come up with thicker walls and you invent artillery and different types of cover and it's kind of this arms race. Do you have any indications now which side might have the advantage with the new AI coming out or does it seem kind of evenly matched?

Daniel Ben-Chitrit
So I would say the bad actors are winning right now. Now, it's not like the world's on fire and everyone should run for shelter. But I would say the bad actors are winning right now. And the reason why is because, having worked at a cybersecurity company, both developing a product and defending assets, it takes a lot of time to integrate new technology into a platform or into a standard toolkit for an analyst. As a bad actor, I just go to the website, I type in what I want, and I get it out. There's no lawyer sitting over me saying, hey, what are you doing? Or, hey, you can't do this, or I can't do that, or I have to plug this in here. So it's a lot easier. There's a lower barrier to entry, as they would say, for the bad actor. So we've seen a lot of this. I'm sure everyone who's gotten a smishing or a phishing attempt recently has seen an upleveling of those types of things. Personal data might be getting put in there. It's not just your CEO saying, give me a gift card. It's now, hey, I'm in a meeting with this person and we're at this place and I need this.

Daniel Ben-Chitrit
It's like, oh, that sounds more like something my CEO might ask me to do than just, give me some gift cards from Target or whatever. So that's one big area. The other, I would have to say, is the cutting-edge area. So vishing. If you're not familiar with vishing, it's voice phishing. With 3 seconds of online audio, and this podcast is going to be online, so they're going to have both of our voices, you can actually generate an AI voice. You can start to call someone, you can spoof their phone number, you can do all these things. And that becomes a lot harder to spot, because it's not just a robotic voice or a guy sitting in a call center somewhere trying to tell you that he's the IRS. Now it sounds like your sister, your brother, your kid, whoever, actually calling you. And this is maybe less on the commercial, B-to-B side, more B-to-C-type stuff, where the consumer is at risk. But I wouldn't be surprised if that comes into the B-to-B space in the near future.

Robert Baillieul
I would just say, for any listeners that might not be familiar with the capabilities of this AI and the level of sophistication we're talking about: check out the fake video of Joe Rogan interviewing Justin Trudeau. I'll include a link in the description of this episode, and it is spot on. If I didn't know it was fake, I'd be like, wow, Trudeau was on the Joe Rogan podcast. That's surprising. But it was uncanny, what this technology can do. And any kind of voice prints and things like that, all of that is kind of out of date at this point.

Daniel Ben-Chitrit
And that's kind of the whole point of my comment that the bad guys are winning right now, a little bit: people have to sit and think, is that real? I just saw something and I have to stop and think, that doesn't seem right, now I need to go and Google to see, did this actually happen? Now, I trust Google enough to say, hey, if there's a couple of articles that say it happened, then it probably did, but that might not be the case in the future, right? I might be Googling and there might be 16 different AI-generated articles telling me it was real, but it was never real. You start to have to question, and that's where the uncertainty comes in. Even if they're not winning from a financial standpoint, or from getting ransomware in or doing damage in some way, the fact that people have to pause, stop, think, I would say that's winning to a certain extent.

Robert Baillieul
Yeah, well, I had this experience this weekend. My partner's sister showed me a TikTok video of the game controller from the OceanGate submersible wreck on the bottom of the ocean. It looked really real, but instantly I was like, oh, that's an AI image. So we had a dispute about it. Whereas maybe six months ago, I might have been more inclined to see that image online as real. But now, knowing this stuff, you've just got to stop so much more, because the images that you're seeing are so convincing.

Daniel Ben-Chitrit
There was another big thing over this weekend. I'm sure everyone listening to this has heard about what happened in Russia.

Robert Baillieul
Yeah, it's old news by time this comes out.

Daniel Ben-Chitrit
Yeah, I know. Exactly right. But when this is recording, that was this weekend, two days ago; there were some things happening in Russia. With all the stuff coming out of there, everyone, and I'm doing some OSINT stuff on the side, everyone's asking, are these real? Are these videos real? Are these photos real? Because we didn't know. We didn't know if the data was coming out of Russia. I mean, we had people on the ground, so we ended up verifying it. But initially, when we first started to hear about it: this could just be all fake. This could be complete disinformation. That's the thing, you have to stop and think now; you can't just act.

Robert Baillieul
Yeah, well, and this brings me back to the point that you wanted to talk about a minute ago, the disinformation and the capabilities that this is going to give nation states. And like you said with that example, could that whole coup attempt have been, I mean, we know now it wasn't, but could it have been all disinformation right from the get-go? A couple of years ago, or for all of history, really, with state-sponsored disinformation, you needed a team of people writing this stuff, and it took a lot of work. Now, I could create some of these sophisticated campaigns with a few hours of prompting.

Daniel Ben-Chitrit
Yeah, two sides of it, right? So there's the places that you have free press. I can go online, I can go look up all of this stuff, what's disinformation, what's not. But on the flip side, in the places where they're locking down access to data, that's where disinformation and this type of technology becomes even more scary because I don't have to have a propaganda guy sitting there thinking up all these things. They'll just ask ChatGPT. What should I tell the Russian people about this war that's happening? And they'll tell you a bunch of stuff and you can craft it, and you can use this to do some writing and to craft a narrative that sounds so realistic that people won't even question it, especially if they're only getting data from one side.

Robert Baillieul
Let's go into another thing that you wanted to discuss, and that I thought was very important as well: some of the challenges, limitations and security risks that can come from using these language models. Can you tell me a little bit more about that?

Daniel Ben-Chitrit
Yeah, I think the biggest thing, always, with new technology: take it with a grain of salt. Have some amount of skepticism. I'm sure everyone's probably heard the story of the lawyer that used ChatGPT, and it came up with a completely fake precedent, and he used it in a filing, and now he's getting censured and disbarred and all that type of stuff. Take it with a grain of salt, right? You don't want to be the type of person who's just going to take it as real. That's the first thing to always keep in mind. But the data that is put into these models is the most important data when using it. So one, take it with a grain of salt; it might not be true, it might just make something up. But think about what data is getting put into it. ChatGPT, for example, uses Wikipedia. I can potentially start to maliciously make changes to Wikipedia that ChatGPT will pick up. And now, one, I could put disinformation there, but two, I can start to train it to say, oh, this type of malware is okay.

Daniel Ben-Chitrit
And now I've gotten in, because some of you are using a ChatGPT-powered, or whatever AI-enabled, web gateway. So those are the types of things you have to worry about: not just the data coming out, but the data going in. It's called poisoning, right? You're poisoning the data. Nation states are probably going to be the biggest actors doing that, but even the small guy, if they know where to look or what to change, can do a lot of damage without even writing a single line of code.
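
A toy sketch of the poisoning risk Dan describes: if an attacker can flood the data a model trains on, the model's "most likely" answer shifts with it. The frequency counter below stands in for a real training pipeline, and the strings are invented; the only point is that the output faithfully follows the (poisoned) input data:

```python
from collections import Counter

def most_likely_completion(corpus, prompt):
    """Return the most frequent continuation of `prompt` in the corpus.
    A crude stand-in for 'training a model on this data'."""
    completions = Counter(
        line[len(prompt):].strip()
        for line in corpus
        if line.startswith(prompt)
    )
    return completions.most_common(1)[0][0]

clean = ["this file is malware"] * 5
print(most_likely_completion(clean, "this file is"))     # malware

# An attacker floods the training data with contradicting text:
poisoned = clean + ["this file is safe"] * 20
print(most_likely_completion(poisoned, "this file is"))  # safe
```

No code was "hacked" here; the attacker only changed what the system learned from, which is exactly why write access to sources like Wikipedia matters.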

Robert Baillieul
That's an interesting perspective. I hadn't thought of that, because I'm aware that you can use the output, and that's what a lot of the focus has been on. But I know a lot of data scientists have been talking about how this stuff's only as good as the data that's being fed into it. And so it's like, well, how can you play with that? I hadn't thought of that as a vulnerability.

Daniel Ben-Chitrit
It's one of the big reasons I say the biggest value, at least right now, is using it to do summaries, to get reports, to do things where you're feeding in the data that you want information about. You're not relying on its data store. ChatGPT is public right now. You obviously can get private instances, and same thing with Bard and Titan and all the other ones, you can get your private ones. Then you have to train it on your own data sets, and that takes a long time. If you use the public data sets, there's always a risk. But if you're putting data in and saying, tell me about the data that I'm providing you, that risk becomes much smaller, because you're not relying on whatever someone else trained that model on.
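
Dan's "tell me about the data I'm providing you" pattern can be sketched as simple prompt construction: embed your own collected sources in the prompt and ask the model to summarize only those, rather than relying on its training data. The helper below just builds the prompt string; the actual model call is omitted, and the function name and wording are illustrative assumptions, not any vendor's API:

```python
def build_grounded_prompt(sources, question):
    """Embed the analyst's own collected material in the prompt so the
    model answers from *that* text instead of its training data."""
    numbered = "\n\n".join(
        f"[Source {i}]\n{text}" for i, text in enumerate(sources, start=1)
    )
    return (
        "Using ONLY the sources below, answer the question. "
        "If the sources do not contain the answer, say so.\n\n"
        f"{numbered}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt(
    ["Site A reported an outage at 09:00.", "Site B confirmed the outage."],
    "What happened this morning?",
)
print(prompt)
```

The resulting string would then be sent to whichever model you use; the safety property comes from the instruction to answer only from the supplied sources, which shrinks (though doesn't eliminate) the risk of made-up or poisoned facts.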

Robert Baillieul
One other risk that has been talked a lot about is employees kind of putting sensitive information, or maybe personal information into the chat bot. Can you tell me a bit more about that and some of the concerns you have with that?

Daniel Ben-Chitrit
Oh, man. People sometimes don't think before they act. ChatGPT came out, and I think the first thing I saw an article about was how Toyota or someone blocked it, because people started putting in proprietary code, saying, hey, ChatGPT, can you optimize this code for me? They're just taking proprietary code, throwing it into an open model and getting some feedback. And sure, it was good, I'm sure it helped, but now that code is stored in the language model. And I'm sure people have heard that ChatGPT has been breached. Now, I don't think they got any of that sensitive data, but they got the PII of people. So what's stopping someone from breaching that and getting access to this Toyota code, or whoever's code? So from a coding standpoint, obviously a lot of risk there. But PII: people put in information that they really shouldn't. I mean, hopefully people aren't putting in Social Security numbers and passwords and all these types of things. It wouldn't surprise me if some people have, but those are the types of things that you have to be very careful about. So what have people, or companies, done?

Daniel Ben-Chitrit
They said, okay, we're blocking ChatGPT. Well, I can go on my personal laptop and probably put the same information in. Maybe I still have access to GitHub because I'm coding on the side, and I have access to my company GitHub, and I can't get that out. One of the things we're seeing is zero trust in this world. Just generally, looking at what we're doing at Authentic8, zero trust application access is a product that we're using to stop people from being able to access this type of data on personal devices. So that does help when someone is on a personal device trying to push something into ChatGPT that they shouldn't. On the corporate side, you can just block it. Now, one of the downfalls that I think people are going to start to see when you do that is: well, you're not using it, but your competitors are, and they're going to get value from this. You can go the hard route of just, no, you can't do it, but it's better to train people and to say, hey, be smart about this. Now you've got to trust people, and it depends on what industry you're in, but there's a fine line there, I would say.

Robert Baillieul
I like that insight, Dan, because this technology is able to multiply people's talents by so much. And so if you're just saying, hey, security risk, we're not doing it, you've instantly put yourself at a major disadvantage to every other competitor. In any industry, everyone's looking, across all the different departments, at how to use this stuff. So I think that's a good point that you make: just a blanket no is not a great long-term solution. I think of countries, like, I think Italy has just banned ChatGPT or something. I remember reading about that, and I'm just like, you've kind of put your whole country at a disadvantage with the technology, even though we don't know where it's going to go. I think that's a great takeaway. It sounds like, if I'm getting your recommendation right, training and awareness is a far better option than just saying stay away from it.

Daniel Ben-Chitrit
Yeah. And I mean, if you haven't trained your organization about this, then sure, block it, right? Depending on what you do, there's an existential threat of people doing stupid things. Maybe it's not that big a risk and you want to do training while it's still open, but if you need to block, just block. You shouldn't just leave it blocked, though. You should be focusing on training. You don't block email because people get phished. You train people how to spot a phish, and people learn. Now, if you have the most sensitive of sensitive code and this one person is the only one with access to it, then sure, maybe block ChatGPT on the one computer that has access to that data. But generally, 90% of an engineer's time is writing code that isn't really proprietary. A great example: how many extensions have come out with ChatGPT built in? Tons. How many extensions have been built by ChatGPT? Hundreds. If I need to build an extension for my company and I've never written an extension before, being able to just say, hey, give me a framework for it, could save me hours of time. And we always need more engineers. Giving them five hours a week of extra coding time where they're doing the real work that ChatGPT can't do, or shouldn't be doing, sounds good to me as a product manager.

Robert Baillieul
All right, Dan, we're coming up on all the time I promise to take from you today. What's the main takeaway you want listeners to remember from our conversation?

Daniel Ben-Chitrit
Yeah, there's a lot, obviously, but I think the first one is don't be afraid. New technology comes out every day. The world's not on fire. The world's not coming to an end. Embrace the technology, but come with some skepticism. From a cybersecurity standpoint, learn about the pros and cons. From an OSINT standpoint, use this technology to help you get information, but more importantly, to distill that information into something you can actually use for downstream analysis. But if you are taking data that gets generated from a model, take it with a grain of salt. Don't just take it as truth. Don't be afraid to question, especially when you're looking at open source information, because you never know what's lurking behind the scenes.

Robert Baillieul
All right. And if people want to reach out to you or learn more about the work you're doing at Authentic8, how can they get in touch?

Daniel Ben-Chitrit
Yeah, so you can reach out to me at my email, dbenchitrit@authentic8.com. We have a product, Silo for Research, which is an online investigation platform that allows analysts to avoid tipping off targets that might be watching while they're doing OSINT research. If you go to authentic8.com, that's Authentic with an eight, just like my shirt, you'll find a free trial for that research platform. I think it's 14 days, so you can actually try out the platform and do all that type of stuff. I'm also happy to connect with people on LinkedIn. It's my name, just search it. There aren't many Daniel Ben-Chitrits out there, so I'm sure you'll be able to find me. Happy to talk via email, et cetera. So, yeah, just feel free to reach out.

Robert Baillieul
Dan, thanks for joining me today.

Daniel Ben-Chitrit
Thank you.

Shannon Ragan
We hope you enjoyed this guest episode from our friends at Liferaft. Thank you again to Daniel for being on as well. We'll be back next week with more on AI in OSINT. Until then, if you liked what you heard, you can view episode info and transcripts on our website at authentic8.com/needlestack. That's Authentic with the number eight, dot com, slash needlestack. And be sure to follow us on Twitter @needlestackpod and subscribe to us wherever you're listening today. We'll see you next week. Bye.
