
Wondering how to incorporate AI into your OSINT framework? We asked the CEO and founder of the popular training organization, OSINT Combine. 

Key takeaways

  • How can OSINT researchers leverage generative AI?
  • Tips for spotting disinformation in an AI world
  • Teaching ChatGPT your company datasets

About Chris Poulter

Chris is the Founder & CEO of OSINT Combine, a leader in open-source intelligence training and software that supports counter-human trafficking, counter-terrorism, law enforcement and other strategically important efforts around the world. He spent over a decade in the Australian Defence Force, with extensive operational experience worldwide. He has an established background in military and law enforcement environments, domestically and overseas, with proven outcomes in developing open-source intelligence capability for multinational teams serving strategic and operational objectives where the cyber, human and physical terrain overlap. He is also the creator of NexusXplore, a leading OSINT software platform trusted globally to support complex mission sets in the open-source environment.

References from the show

OSINT Combine
NexusXplore
Australian OSINT Symposium
Connect with Chris

 

Chris Poulter
And I think one of the biggest things is we need Centers of Excellence to start pushing these things out, educating people, perhaps just having leaders in the industry who can go and educate around ethical use, risks and challenges. Just so we get comfortable and, rather than putting our heads in the sand, really embrace the reality of this, because it's not going away.

Jeff Phillips
Welcome to Needlestack, the podcast for professional online research. I'm Jeff Phillips, your host.

Aubrey Byron
And I'm Aubrey Byron, producer and co-host. Today we're discussing how OSINT practitioners can leverage generative AI in their research and what analysts need to know about emerging AI technology.

Jeff Phillips
And joining us for that discussion is Chris Poulter, founder and CEO of OSINT Combine. Chris, welcome to the show.

Chris Poulter
Thanks for having us. Really appreciate it.

Jeff Phillips
We really appreciate you joining us. So Chris, to start us off, can you tell us a little bit about OSINT Combine and what you all do?

Chris Poulter
Yeah, so we're an OSINT capability provider, at the end of the day, that looks to develop enduring OSINT capability across strategic organizations. We have customers all over the world, and we provide training, software and services to very strategically focused organizations. At the end of the day, we're looking at how we make a better impact in the world, how we make the world a safer place.

Jeff Phillips
Well, for our topic today, it makes a lot of sense to me that since you're in the training and software space, you and your organization would be one that dives into the newest technologies and what they mean for researchers. For sure.

Chris Poulter
Yeah, absolutely. So when we start looking at generative AI and taking a pulse check on where this technology sits today, how it's going to evolve, and what it all means, it really is an exciting time. It's a challenging and potentially scary time, too, in terms of how this all sits and how it works. But the challenge for us is to get comfortable with what that emerging technology is so we can use it appropriately and ethically, whether that's safeguards or how we actually get the most out of it, because there is huge potential in it as well. And I think that's the challenge: first trying to understand it, and then working towards better use of that technology.

Aubrey Byron
So you've given webinars on generative AI. In what ways are you teaching researchers to use this technology to their advantage?

Chris Poulter
So, probably two ways. The first is to lower that barrier to entry of understanding: taking a higher-level approach to what generative AI is, what the differences are, what it can potentially mean, diving into some of the risks, the challenges and the limitations, and having a look at where it sits today versus where it might evolve tomorrow. The other area we focus on teaching is that it's not all about what online systems are available; it's how you can actually start to use some of your own data. Because the key thing I focus on is that we are changing our behaviors in the way we interact with data, and we need to get comfortable with that change. We went through the same thing with search. Search engines have obviously evolved how we access and find information. That behavior shifted from going to the library, finding the book and working through it, to search, with all the validation and verification challenges that brings, because anyone can put information online and run a narrative of their own. And now we're lowering, or bridging, the gap again.

Chris Poulter
And as strange as this might sound, we're bridging the gap by letting nontechnical people interact with data in really sophisticated ways, asking complex, human-like questions that allow them to sift through it without having to be experts in data science to get meaning out of complex datasets. So I think, overarching, it's really about behaviors and what that's going to mean for the future.
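
To make that idea concrete, one common pattern for letting a nontechnical user ask human-like questions of their own dataset is retrieval-augmented generation: embed the documents, retrieve the snippets most relevant to the question, and have a language model answer from only those snippets. The sketch below is a minimal illustration, assuming the OpenAI Python SDK; the model names, sample documents and question are invented placeholders, not tooling discussed in the episode.

```python
# Minimal retrieval-augmented Q&A sketch. Assumes the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY in the environment; model
# names are current as of writing and may change.
import math

from openai import OpenAI

client = OpenAI()

# Stand-in for "your own data" -- in practice, report paragraphs,
# scraped posts, case notes, etc.
documents = [
    "Vessel A departed the port on 3 March carrying grain.",
    "Account @example_handle amplified the same claim across 40 groups.",
    "The training symposium runs 12-14 September in Sydney.",
]

def embed(texts):
    """Turn text into vectors so semantic closeness can be computed."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

doc_vectors = embed(documents)

question = "What do we know about the grain shipment?"
q_vec = embed([question])[0]

# Retrieve the single most relevant snippet (real systems take top-k).
best = max(range(len(documents)), key=lambda i: cosine(q_vec, doc_vectors[i]))

# Constrain the model to the retrieved context so answers stay grounded.
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Answer only from the provided context. If the context "
                    "does not contain the answer, say so."},
        {"role": "user",
         "content": f"Context: {documents[best]}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```

The research question itself becomes the interface: the analyst asks in plain language, and the retrieval step decides which slice of the dataset the model is allowed to reason over.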

Jeff Phillips
It's interesting how you talk about that, Chris, because especially among people who don't follow technology, some see AI as "we're all being replaced, and this is a terrible thing coming down the path." And then there are others, probably more the early adopters, who see it the way you're talking about it: that there's a role for AI alongside the analyst. So I shouldn't have job-security anxiety; it's a tool, if you will. Is that kind of how you see it and how you're talking to analysts about it?

Chris Poulter
Absolutely, yeah. I mean, a tool in the toolbox, but also, again, something that's really going to change our behaviors around how we access information in the future. It could be as acute as, hey, you go and use this system, you ask it a question, you start to interact with it that way. Or it could be as subtle as it's running in the background: you're not aware of it, but it's changing how you access complex information sets. And if we look at the evolution of change, when people were talking about job security and who this is going to replace, it's not as simple as that. Look at manufacturing, when things went from handmade to machine-produced on production lines. It might have replaced certain industries in certain areas, but those people took on other functions that were just as critical to making it succeed. AI is no different. As we shift from the physical world to the digital world, lowering the barrier and bridging the gap between people and the complex data they can now ask questions of, we need analysts more than ever to challenge it, to understand it, to keep us honest on the efficacy of that data. But we need to jump in and start somewhere, and I think it's an important discussion to have.

Jeff Phillips
One thing I see a lot: people are sharing a lot on LinkedIn, and it seems like everyone's all about training AI and what prompts they're using. Is that part of what you're out talking to analysts about? Do you have tips for training AI from an OSINT perspective to fit your needs?

Chris Poulter
Yeah, so probably two ways. If you look at training in the sense that you want to train it against your own data, to build your own large language models, that's one aspect. Then you can interact with your own data in a certain way, a human-like way, and that's where most people are enjoying the benefits of this, because you can turn around mundane tasks, or access data by asking the normal question you would otherwise pose as your research question; the research question now actually becomes the way you interact. Whereas before, you had a research question and a data scientist or an analyst who would go and work through the data, now you bridge those two things and put the analyst on the other side, where they start to interpret what comes out. So it's interesting when we talk about tips: there's the training side, training data, versus prompt engineering, how you interact with the data to get the most out of it. And that's an evolving thing. The way you search and ask questions now, and the answers you get back, will be significantly different tomorrow and the next day, because it's unsupervised, self-supervised learning, all those things.

Chris Poulter
It's evolving and growing, which is itself a scary thought: what does that mean for its understanding of the data and the questions we're asking? Because every time we ask it a question, particularly if you start throwing in things like colloquialisms, what do all those things mean as it evolves? We saw that with Translate. When Google Translate started, it didn't handle colloquialisms very well; now it's phenomenal, because every time someone uses it, it gets better. So my point is that it's going to keep evolving, and we've got to stay relevant in understanding it. As for tips, I would say use things like chain-of-thought sequencing. The same way that when you interact with another human you pose a question or a debate and they come back with an answer, you might challenge that. So always challenge what comes back; always validate and verify the information, and then start to work through it. But also think about how you can reframe a question. From a human-interaction perspective, when you ask someone a question and they don't get what it is, reframing it a different way will get you the answer you're after, or tease out the bits of information that you need.

Chris Poulter
Working and interacting with generative AI is no different. You just need to work out effective ways to reframe things to get higher efficacy in the results you're after.
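
One way to picture that challenge-and-reframe loop in code: ask with step-by-step reasoning (chain-of-thought), have the model critique its own answer, then re-ask the question from another angle and compare. This is a hedged sketch assuming the same OpenAI-style chat API as above; the question and prompts are invented for illustration, and the validation and verification Chris describes still sits with the human analyst.

```python
# Sketch of "always challenge what comes back": ask, critique, reframe.
# Assumes the OpenAI Python SDK; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt):
    """Single-turn question to the model, returning the text answer."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "Which platforms were used to spread the recruitment posters?"

# 1. First pass: chain-of-thought, asking for step-by-step reasoning.
first = ask(f"{question}\nThink through your answer step by step.")

# 2. Challenge: have the model audit its own answer for weak claims.
critique = ask(
    f"Question: {question}\nAnswer: {first}\n"
    "List every claim above that is unverified or would need a source."
)

# 3. Reframe: same question from another angle, to tease out more.
reframed = ask(
    f"Reframe and answer: {question} "
    "Focus on what evidence would confirm or refute each claim."
)

# The analyst compares the three outputs; the loop only surfaces weak
# spots, it does not replace independent validation and verification.
for label, text in [("first", first), ("critique", critique), ("reframed", reframed)]:
    print(f"--- {label} ---\n{text}\n")
```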

Aubrey Byron
So we're talking about the benefits, but on the flip side, there are a lot of concerns about how AI could fuel disinformation with the advancement of deepfakes, for instance. How do you see that playing out so far?

Chris Poulter
Yeah, it's a huge challenge on multiple fronts. You can talk about it from a macro perspective, at a geopolitical level: the ability to influence populations en masse and what that means for society. Then you can get down to a micro level and look at things in the child abuse space, the ability for people to generate imagery that can then be distributed online, where questions start to get asked from a legal perspective: is there a victim? Of course, there are all these different answers to that, so there's a huge problem there. It doesn't mean the technology is going to go away, because like anything, there's good and bad, and our ability to challenge that will be a back-and-forth that goes on forever. The other side of it is the distribution pipelines. There are the huge online social media platforms, and you've got disinformation and misinformation from that perspective. But now you can generate such realistic content that if you start putting it into things like closed message groups, you're reinforcing an echo chamber at a hyper level. Think about it: you're in WhatsApp groups with people of similar interests.

Chris Poulter
So people's ability to influence those groups with really realistic content, where before they might have had to wait for something to come along, or it really didn't hold weight, means they can now hyper-activate disinformation and misinformation through closed echo chambers that then leak outward. And you often can't source-identify on the back of that, because of where it came from, where it started and how it spread. So there are huge challenges, huge risks, and that's one of the key ones for an analyst: working out the relevancy of the information, the credibility, all that sort of stuff. An analyst is more important than ever because of those challenges.

Jeff Phillips
Well, then it puts a lot on the analysts, I'm thinking. Keeping up with all of these things, all the technologies, that's not their actual job, right? The investigation, whatever the case may be, that's the job. So how do analysts keep up with all of this, with what's going on with deepfakes and generative AI? Do you just give up your evenings, study 24/7, do your investigations eight to five and study outside of that?

Chris Poulter
Yeah, look, it's hard, right? But there's new technology all the time, and there are things we anchor to and enjoy, and so it stimulates our interest. This is going to underpin, again, our behaviors around how we interact with data, and data and information is the biggest commodity in the world, so this is one you can't actually get away from. I think the first step is understanding it, getting comfortable with what it might mean for your role. You will have a cohort, a demographic of individuals, who won't want to adopt this and will really struggle and fight against it. But we saw that with the internet. You see it with open-source intelligence: it's taken decades for it to be in the limelight of how valuable it is. The concept that information doesn't need to be secret to be valuable is now at the forefront of people's minds, to the point where you can get 80% of what you need from the open-source or publicly available information environment and then go and focus your other efforts. This is similar to that.

Chris Poulter
And I think one of the biggest things is we need Centers of Excellence to start pushing these things out, educating people, perhaps just having leaders in the industry who can go and educate around ethical use, risks and challenges. Just so we get comfortable and, rather than putting our heads in the sand, really embrace the reality of this, because it's not going away.

Jeff Phillips
You mentioned ethical use, and that actually prompts a question. I know the sensitive organizations your company works with, so I think about policies, whether that's policies on what you can do on the dark web, or whether you're allowed to use sock puppets. Is that coming up related to generative AI, organizations having policies on what you can and can't do with it? Is that an issue yet?

Chris Poulter
I think it's something that every organization should be looking at: everything from how you set parameters and constraints within your organization for how people use it, to also giving them the opportunity to leverage it effectively. So I think those policies and procedures are going to be a natural thing that every organization needs and should be doing. I think the risk is particularly in strategic organizations that are kind of ignoring it, saying it's too much, we don't know how to leverage it. If you do that, you're kicking the can down the road, and it will become a problem when you have that issue, that person who's used it in an unethical manner, and now your credibility is gone. So I think you'll have organizations that will retrospectively have to go and fix that. But if you're a forward-leaning organization, you want to be having these discussions. To be fair, particularly in the government sector, there's a huge amount of work and a lot of education going into this, both open- and closed-door discussions on what the reality of this is.

Chris Poulter
So I think it is at the forefront. I don't think it's a problem; people are trying to understand it, but it takes time, particularly time to get people comfortable with it. So no, I think we're on a good path.

Jeff Phillips
Okay, that makes sense.

Aubrey Byron
Yeah. I can only imagine that a lot of SOC teams are playing catch up right now.

Chris Poulter
Well, everyone's so busy, right? Everyone's busy with their own work. Like you said before, Jeff, you're already doing all these other things, and now you've got this extra thing you need to start dialing into. But I think the reality is that once we get comfortable with it, it'll create efficiencies that will hopefully free up some of those other areas, open up other opportunities. It stimulates the mind on how we can work differently, which I think culturally can be beneficial for what we're trying to do, particularly in the open-source intelligence space. There's opportunity there, and we shouldn't waste it.

Aubrey Byron
So, shifting gears a little before we go: you're going to be at the Australian OSINT Symposium this fall, and I just wanted to mention it for our listeners who might be in the area, or who just might have awesome travel budgets.

Chris Poulter
We're going to see you guys down here, right? So, yeah: the Australian OSINT Symposium, September 12-14, in Sydney, Australia. It's selling out quickly, I can tell you that much. It's going to be amazing. We've got open and closed-door sessions. The open sessions are for everybody; the closed sessions are for government, law enforcement, military, those sorts of organizations, but international, so you can come down if you're from one of those organizations overseas. There's just an element of vetting so we can have other discussions that are more sensitive, but still in the unclassified, open-source space. We have some great events, some great networking. It's all educational, very much focused on getting the message out and on people sharing their experiences in the open-source intelligence landscape. So yeah, please come on down. It's going to be amazing, and the weather will hopefully be fantastic.

Aubrey Byron
Awesome.

Jeff Phillips
Aubrey, it sounds like we should do a podcast live from the symposium.

Chris Poulter
I like it.

Aubrey Byron
It's a great idea. We'll fix it.

Jeff Phillips
Well, thanks to our guest, Chris Poulter for joining us. I think this is going to be a topic of future podcasts for a couple of years to come at least. So thank you again, Chris, for joining us today.

Chris Poulter
No, thanks for having me. I really appreciate it.

Jeff Phillips
Absolutely. If you like what you heard, you can view transcripts and other episode info on our website, authentic8.com/needlestack. That's authentic with the number eight, dot com, slash needlestack. And be sure to let us know your thoughts on Twitter @needlestackpod, and to like and subscribe wherever you're listening today. We'll be back next week with more on how analysts can use emerging technology. We'll see you then.
 

More like this

Podcast

S2E50 | LifeRaft: What security teams need to know about ChatGPT

Podcast

S2E40 | Take-home tips for OSINT Curious researchers

Podcast

S1E30 | How to perform a fact-check: from start to finish
