MATT ASHBURN

Welcome to NeedleStack, the podcast for professional online researchers. I'm your host Matt Ashburn, and for once, I've never lost a SOC in the clothes dryer.

JEFF PHILLIPS

And I'm Jeff Phillips, tech industry veteran and curious to a fault. Today we are kicking off our series covering SOCs, or security operations centers, as well as cyber threat intelligence. And to get started, we're joined by Rob Fuller. Rob is the red team and cyber threat intelligence director at a major US airline. Rob has substantial experience in running red teams, going back to his time as a Marine. He's also a founding member of NoVA Hackers and served as a senior technical advisor for the sitcom Silicon Valley. Rob, welcome to the show.

ROB FULLER

Thanks, thanks for having me.

JEFF PHILLIPS

Thank you. To get started, for the audience members who may not be familiar with what a red team does, can you define it and what it does? And then also, given your background, does it function differently at all in the commercial enterprise space versus when you were in the public sector and with government?

ROB FULLER

Sure. First, I just wanted to say I don't believe you, Matt, on not losing a SOC. That actually never happens, and if it did, it would be a Thanos-snap moment. So I don't believe you. But from a red team point of view, the definition is all over the place, depending on who you ask, so this is my definition of it. Red teaming dates back quite a few years. Some things I've read put it back into the Crusades even; some places trace it just to the military, the US military. I can't really find a set-in-stone point when it came about, but essentially, the idea is you have a "red team" that goes up against a "blue team," and traditionally, historically, the red team is made up of people from the blue team. So a military unit would take a portion of that unit and cast them as the "bad guys" so that they can test how well they're trained, right? You can only train so much. You can get to the point where you feel that you're proficient in something, or that everyone in your unit would act a certain way, but you never know until you come under fire, right? The famous Mike Tyson quote: everyone has a plan until they get hit. So what those red teams are there for is to test those defenses, test those reactions. In cyberspace, it's very much the same thing. A red team is designed to go against the blue team in a very collaborative way, where they're talking about the strategies, everything they're doing, and how to improve upon it. So, in a general sense, that is what a red team is designed to do, from a vocabulary point of view.

JEFF PHILLIPS

So the red team is acting, since we're talking about cybersecurity, like an external threat actor, if you will, and trying to test the defenses?

ROB FULLER

So when you say threat actor, there's a lot of—and this is where the definition gets a little muddy. A lot of people think that a red team is supposed to simulate real, defined threat actors. But when red teaming came about, there was no such thing as a threat actor, right? Then cyber came along and we got MITRE ATT&CK and more definition; Mandiant put out things like APT29 and all this stuff where we had actual definitions of threat actors, and the definition of red team started getting muddied, where you're supposed to, you know, only act like APT29, 57, 65, whatever. And when you say threat actor, what I mean by that, or what I hear from that, I'm sorry, is a threat actor, not APT25-5, whatever. You're supposed to act like a bad guy, a criminal, just like they did supposedly in the Crusades or military-wise.

MATT ASHBURN

Yeah, essentially play the part of an adversary and look at the security posture from their point of view?

ROB FULLER

Right.

MATT ASHBURN

Interestingly, you also run the CTI side of the house as well, right? So can you talk a bit about how that differs in the workflow and actually how those two can sometimes work together?

ROB FULLER

Sure. So, for cyber threat intelligence, the focus there is intelligence analysis, right? You're supposed to take in all of these different pieces from threat feeds and news articles, and you're supposed to have the knowledge to then apply it to the business impact, via cyber or strictly business, financial or otherwise. A cyber threat intel team may find out, "Hey, there's a new restriction on oil and gas from Ukraine and Russia, and they're attacking these types of companies that do these types of things for oil and gas, and they're using these tactics." So as a cyber threat intel team, we'll take those tactics and move them to our response team and our security operations team to identify what can be found if they've already attacked us, or what detections can be put in place if they're not already there. So, how I have seen red teams and cyber intel teams work together (and this is actually getting into an exciting piece) is having intelligence teams build out profiles, not of attribution and threat actors, but of targets. One of the things that a red team isn't always perfect at is figuring out what's important to the business, what could take a company down, what kind of focus a company needs to have to continue doing business. And an intel team, one of the things they have to know is what affects the business, right? Their job is to know, from a cyber point of view or bigger, what that effect is on the company and how to operate on it. So if they supply that targeting to the red team, then the red team can focus and say, "Okay, high impact if..." Let me just give you an example of something we did. At a previous company I worked at, there was a specific individual who held 80% of the patents that company had.
They were a high-level engineer, like a senior fellow engineer, but they weren't a CEO, they weren't an executive, they weren't anyone who would be on a traditional criminal threat actor's target list. But if that person leaves or gets fired or gets attacked, a lot of data is going out the window, a lot of impact on the company for years to come, because they would have had all the research that company was doing for a very long time. So, not even on the radar. The intel team put a package together and said, "Hey, this is this person's daily life. This is what they access on a day-to-day basis. These are the pieces they have access to inside the company." And then the red team took that and developed a scenario where they went after that individual, not intentionally pulling apart their life, but from a cyber point of view: how many ways could a threat actor go after them? Because that's the real impact on the other side, where real criminals and real organizations, depending on the country of origin, have different motives. China is after intellectual property, so a Chinese actor would go after this person. So the red team, essentially, did the engagement around that, and we put more protections in place for that individual. They went from being just a standard high-level engineer, which didn't really warrant any particular insight from an attacker point of view, or from an incident response or security operations point of view, to being added to the high-value targets list, essentially, where any anomalous action on that person was detected, just like we already had for all the CEOs and executives and all that stuff. We started adding people into that because of this type of engagement. That's a long-winded answer, sorry. That's how intel and red teams can work together.

JEFF PHILLIPS

That's super interesting. Diving a little deeper into the threat intel team: you mentioned they have access to all kinds of data, different feeds with information coming inbound. Do your analysts ever have to conduct any online research external to the company, about external threats? I ask because a lot of our audience is around [inaudible 00:10:50] and doing online research. So my question is, if you have to go out as a threat intelligence analyst and engage or look for information, are there any best practices or protections that your team looks to follow?

ROB FULLER

Sure. I'd like to swing this a little bit to the red team side of things. From a red team point of view, one of the things that I always really enjoyed was when incident responders would start acting on my things. I actually wrote a tool called VT Verify that essentially would watch VirusTotal, and as soon as my bad stuff was uploaded to VirusTotal, I would get notified. I knew when I was burned. From a criminal threat actor point of view, a lot of these phishing emails and other things that go on have very unique IDs for exactly who they were sent to. If you try to expand a URL that has been shortened, whatever site or tool you use to do that is going to request those additional pages. If it's Bitly or whatever that shortened that string or that link down, the expander is going to request those pages to find out where it goes. That means a request is going to go to the attacker to expand it. So me as an attacker, I know when that happens. Pulling this all back to your question: it is very, very dangerous to research, even if it's just Googling, different pages or different links that an attacker has sent. From a threat intel point of view, we come at this twofold. We have vendors that are very experienced in doing this. If there's a super sensitive thing that we want researched, we have a couple of vendors that are really good at hitting the dark web, that have profiles that are well established, and that can dig in and sometimes purchase credentials that are for sale on the dark web for us, where that takes a lot more backstory and history. But other research that is less sensitive (and we have playbooks for where that line is, essentially) you can do on websites and things like that. So open source intelligence gathering is a sensitive topic. You have to know when you're tipping your hand versus when you need to go a much more sneaky route.
That is a blurred line that needs some technical knowledge. There really isn't any way to get around that.
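The tipping-your-hand problem Rob describes can be illustrated with a short sketch. Everything here is hypothetical: the `PhishCampaign` class, the URLs, and the token format are invented for illustration. But it shows why merely expanding or fetching a per-recipient tokenized link tells the attacker exactly whose email is being investigated:

```python
import secrets


class PhishCampaign:
    """Hypothetical sketch of per-recipient link tracking, attacker-side."""

    def __init__(self, base_url):
        self.base_url = base_url
        self.tokens = {}  # token -> recipient email
        self.hits = []    # (token, recipient) pairs, in request order

    def link_for(self, recipient):
        # Each recipient gets a unique, unguessable token in their URL.
        token = secrets.token_urlsafe(8)
        self.tokens[token] = recipient
        return f"{self.base_url}/{token}"

    def handle_request(self, url):
        # Any fetch of the link -- by the victim, by a URL-expander site,
        # or by a SOC analyst "just checking it" -- reveals which
        # recipient's copy of the email is being looked at.
        token = url.rsplit("/", 1)[-1]
        recipient = self.tokens.get(token)
        if recipient:
            self.hits.append((token, recipient))
        return recipient


campaign = PhishCampaign("https://short.example/r")
link = campaign.link_for("cfo@victim.example")
# A defender pastes the link into an online expander; the expander fetches
# it, and the attacker now knows exactly whose email was investigated.
who = campaign.handle_request(link)
```

This is why less-sensitive lookups belong behind managed-attribution tooling or established research personas, and why the sensitive ones get handed to specialist vendors.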

MATT ASHBURN

You brought up an interesting point that I think a lot of people may not normally consider, especially if they're new to CTI or the SOC, and it's that threat actors or adversaries, whatever term you'd like to use, know who they're targeting, if it's a closely targeted compromise. They know who they're sending emails to, and many times they do in fact customize those links or those attachments and the subsequent malware for that individual. I'm thinking of one case as you were talking: a very high-profile person within the organization was targeted, received an email with a link, clicked on it, and nothing happened as far as the person could see. But then they thought twice about it and contacted the SOC, did the right thing. When he contacted the SOC, the SOC went and looked at the same link, and nothing happened. It was just benign. Everything looked fine. The SOC in that case said, "Oh, you should be fine. There's no malware there, everything's okay." What that SOC analyst didn't know is that it was a fire-once piece of malware. Once that link was visited that one time, with a fingerprint that looked consistent with a physical machine, not a virtual machine, it deactivated the malware attached to the link and inserted some benign content instead. That's a really important point that you make, and that's a really key thing for folks to keep in mind, especially if they're on the CTI side of things.
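The fire-once behavior Matt describes can be sketched as server-side logic. This is a hypothetical illustration: the `serve` function, the token names, and the single `is_vm` flag are all invented here, and real malware fingerprints browsers far more elaborately than this.

```python
BENIGN_PAGE = "<html><body>Nothing to see here.</body></html>"
PAYLOAD_PAGE = "<html><body><!-- exploit content would go here --></body></html>"

fired = set()  # per-recipient tokens whose payload has already been served


def looks_physical(fingerprint):
    # Crude stand-in for real VM detection (GPU strings, screen size,
    # timing side channels, etc.).
    return not fingerprint.get("is_vm", True)


def serve(token, fingerprint):
    """Serve the payload at most once per token, and only to a machine
    that fingerprints as physical. Every later request, including a SOC
    analyst re-checking the link, gets the benign page."""
    if token not in fired and looks_physical(fingerprint):
        fired.add(token)
        return PAYLOAD_PAGE
    return BENIGN_PAGE


# Victim on a real laptop: gets the payload.
first = serve("recipient-42", {"is_vm": False})
# Analyst re-visiting the same link, even from a physical machine: benign.
second = serve("recipient-42", {"is_vm": False})
# Analyst checking a different recipient's link from a VM: also benign.
third = serve("recipient-99", {"is_vm": True})
```

The point of the sketch: by the time the SOC looks, the attacker's server has already burned the token, so "the link is clean" proves nothing about what the victim saw.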

ROB FULLER

I'll do you one better. We had an engagement where, as a red team, we sent a phishing email off to a bunch of executives and a bunch of people. Very rarely have I consulted for a company where no one clicks the link, but on this one occasion, no one clicked the link. Everyone forwarded it off to their phishing team with the Report Phish button thing that a lot of corporations have these days. But the SOC clicked it; they opened it up to go research it. From the exact same host, we got a ton of clicks, because it was this one person doing this inside of a VM, which he thought was a safe way of doing it. He loaded up the page, and for each person who had reported it, he was clicking to look at what was happening on each one of those hosts. It wasn't sandboxed; it was just his own VM that he was doing the research on. We got access to that VM, and he wasn't reverting it every time. And once we were inside of that VM, we didn't do anything on that system, but that system was not disconnected from the network, because it was on his work machine, on the corporate network. We started expanding from the SOC analyst's virtual machine. It takes a lot to set up a safe environment to do something like that.

MATT ASHBURN

Yeah, it does. The need to click things seems to be ingrained in everybody's personality for some reason. They really want to click things. I'm thinking back to, it's probably like Defcon 13 or something back in 2003. I'm probably showing my age a bit here. But at the time, phishing emails and that type of thing weren't a viable attack. It was just not heard of back then. The focus was more on the perimeter, getting through the IDS, IPS, or firewall. But this really ooey-gooey intranet that you could get to had really very few security precautions. So there was a security researcher from SensePost, and I remember this because it had a big impact on my career, actually. He was like, "Why are we going after the IDS, the IPS, the firewall? Those things are hard, and that's really tough. Why don't we just go straight to the user and have them click something that will then give us access as they have it?" In his presentation, I remember that he was doing a red team or penetration test of a bank, a very well-known bank that he was hired to go and test. He thought he would try out this fairly novel attack method at the time. I remember him saying that they disguised the attachment as a screen saver: "Hey, we're from the bank. Check out this new screen saver. It's themed, it's branded, whatever." Sounds pretty boring to me. But apparently one guy, I remember this, clicked it three times trying to download the thing. He just kept installing the malware over and over and over again because he really wanted that free screen saver. So there are some very silly things that people will click to view or try to install or access, and I say that because that's almost 20 years ago now, it's about 19 years ago, and those same attack methods are still super successful today. Human nature really hasn't changed very much in that regard.

ROB FULLER

You were getting me very close to a soapbox. I don't know if you want to be on.

JEFF PHILLIPS

Step on up, Rob.

ROB FULLER

I don't believe in security awareness training. I think that it is not—it is not their job to be a security professional. They're a forklift driver. They're a finance person. They're an HR representative. I don't believe that users should know what a phishing email looks like. I don't. I think that we as security professionals need to do a better job at making it so that they can click whatever they want and it won't do anything.

MATT ASHBURN

I agree. My pet peeve, actually... I think it's okay to give them training, here are some bad things that can happen, but what really rubs me the wrong way is when you try to instruct users to "not click a suspicious link." I've seen that probably thousands of times in my career, that guidance given out by SOCs and security professionals. "Tell your users, don't click a suspicious link." How the heck would they know what looks suspicious? If it was suspicious, they probably wouldn't click it anyway. It's really unhelpful advice that professionals give folks many times.

ROB FULLER

Agreed.

JEFF PHILLIPS

As the non-practitioner here, I'll echo that, that it's not often super helpful to me. That urge to click.

ROB FULLER

Yeah, it's also really bad to fault them for it, too. Like, "Oh, you clicked our test email, our test phishing security awareness thing. Now you have to go through four hours of..." You know what they're going to do with that, right? Everyone, everyone on the planet. I think HR are the only people who actually go through their own training, and maybe not even then, but everyone's going to click next, next. As a red team at a previous company, we found that you could just add /certificate at the end of the URL for our corporate training and it would just finish the training for you.

MATT ASHBURN

That's great.

ROB FULLER

No one does corporate training the way that it's intended. Why do we expect the security awareness one to be different?

MATT ASHBURN

Yeah, the punitive response is also one that really doesn't sit well with me. I've seen the whole spectrum throughout my career, but the ones that really stick out as being unhelpful are those where you have a CSO or CIO who says, well, they didn't pass the security training, or they didn't pass the penetration test where they clicked the link that looked pretty convincing, frankly. But that's fine, we're going to punish them by revoking their access to the internet or revoking access to something, or shaming them in some way. That is really counter, I think, to security interests completely. It's not the way to go, and frankly, it's probably trying to compensate for their own failings when it comes to their security program, because to your point, Rob, the security should be functioning in the background; the users should just be able to use things and not have to worry about whether they're using them appropriately or not.

ROB FULLER

Exactly. It's exactly like finance. I should not have to worry that finance is going to pay me my paycheck.

MATT ASHBURN

When you said that, I was thinking of an ATM. I was like, yeah, you're right. I go to the ATM, I put my card in, I put my PIN in, and I get money. I don't think about the security behind it, because I'm just a user of that machine and it works as far as I'm concerned.

ROB FULLER

Anyways, sorry. Soapbox done.

MATT ASHBURN

Sorry. We've taken up some time here with SOC stuff. That's my thing, so.

JEFF PHILLIPS

No, that's awesome. You know, I did want to step back, Rob, to your intro. I had mentioned you founded a group called NoVA Hackers, and I know from when we were talking earlier that you're involved in another group as well. I wanted to ask you about some of those. What is NoVA Hackers? What have you gotten involved in, and why do you get involved?

ROB FULLER

Oh, man. So NoVA Hackers started in 2009 after I saw Defcon groups, or Defcon... I think it's called groups. The Defcon groups, they met all over the world. When I first started getting into infosec in 2005-ish, I wanted to get involved. I went to ShmooCon for the first time and wanted to get more involved in the community, because this felt like home. I loved the industry. And I shouldn't say that in past tense; I still love the industry. I really wanted to get involved in my local group, so I went to DC703, I think that was our group, and they just sat around drinking coffee and there was no conversation. Everyone was a government employee and they were just unwinding from their day. I went there expecting, "Hey, this is going to be like a little conference. I'm going to talk to people and they're going to have stories. I'm going to learn tons." Nope. There were three meetings that I went to before I got fed up with it. I was like, these guys just want to unwind, which is cool, fine, but it's not a Defcon group, at least in my definition. So, roll forward to about 2008-ish, I start this infosec luncheon because I wanted friends in my industry. I say, "Hey, anyone in the area in Northern Virginia who wants to have lunch, we're going to go to this Chinese restaurant." I set up a table for it; no one showed up. It was way out of the way. It was close to me at the Pentagon, but way out of the way for a lot of other people. I'm like, "Okay, no one's going to show up to this thing. Let's try to move it somewhere more accessible." So the very first NoVA Hackers meeting was at L'Enfant Plaza in their underground food court.

MATT ASHBURN

Oh, that place is depressing.

ROB FULLER

It super is, it super is. I didn't know that at the time, but it was suggested that that is the place.

MATT ASHBURN

But a good location, though, for the crowd, I think.

ROB FULLER

Yes. I had 13, 14 people say they'd come. I'm like, "Cool. Awesome. I will get there early and I will save space." So I am sitting in L'Enfant Plaza at one of the restaurants that we picked out, saving like six tables for people. And if you've ever been to L'Enfant Plaza or any of the restaurants there, there are not that many tables. So people are looking at me, frustrated; they want their lunch. I'm just sitting there. I sat there for like an hour; no one came. So I'm like, "Last try. Third time's the charm. I'm going to go again." We do this one more time, and 13, 14 people say they're going to come. Again, I show up. I save four tables this time instead of six. One person shows up, and he's late, but he shows up, and that's Chris Gates, my co-founder for NoVA Hackers. He's like, "Where's everybody?" I'm like, "You're it." And after that, what NoVA Hackers came out to be, after we started to actually have people showing up, was essentially a clone of AHA, the Austin Hackers Association. And what this is, is a group that has a really cool way of doing things, I think, and that is: if you show up, you have to present. It's essentially a group where you have to keep contributing or you're no longer part of the group. So you present; we have a six-month rotation now. Essentially, within six months you have to give another presentation, and it's Chatham House rules. So anything that is talked about there you can use, sort of like TLP:RED: you can use it in your day-to-day, but you can't say where it came from or act on it publicly. And what's really awesome is that you can present on anything at NoVA Hackers. We had one person who was just getting into infosec talk about intro to Python, and there are plenty of people who went directly into infosec and didn't do any programming, so that was really interesting for them.
We had someone talk about hacking their coffee machine, which was amazing: making the best brew through a Frankenstein version of a coffee machine that connected to the Internet, so that before IoT was a big thing, you could have an IoT-style coffee machine that would start up in the morning at the push of a button and do all this stuff. So NoVA Hackers has had a myriad of talks, and it's been a really great experience, and I've loved building it. We have close to 700 members now, so I'm not the only one showing up anymore, which is awesome. It's been one of the most rewarding situations and one of the most challenging, too, because managing the expectations and relationships of 700 members is really tough. We have a lot of pretty direct rules that basically amount to: don't be a butthead. That group grew to the point where we had our own mini conference after ShmooCon, called ShmooCon Epilogue. Yeah, I know, genius idea on the name. It hasn't really weathered the storm of the pandemic very well, because it has to be in person and the place we used to meet at is no longer available. So that's that group. But it's been a really rewarding and awesome experience to manage and run that group, but also just to sit and learn. We meet monthly, or did meet monthly; we're still kind of going, but we're figuring it out, every second Monday of the month. I learn every single time I go. There's stuff that just comes from the myriad of experience. We had one guy drop zero-days, and he was like 19 years old. He dropped zero-days on RSA tokens. He was literally sitting there talking about how you could exploit and figure out the number for an RSA token, and we're like, "Stop talking. Go sell this. It's worth millions of dollars. You're 19 years old. I know that you don't understand what you're talking about, but there are literally people in this room who can help you sell this for multi-millions of dollars." Yeah.
But that's the space NoVA Hackers operates in, from Python intros and coffee hacking all the way to RSA zero-days.

MATT ASHBURN

Yeah, that's awesome, man. Great job on getting that up and going and keeping at it because it's a lot of work, as you said. So one final question, I guess, and this is a bit of a fun one maybe to close on. What was it like consulting on Silicon Valley, and any thoughts on representation in TV and media of security researchers?

ROB FULLER

That last bit might take a while.

MATT ASHBURN

Ignore that then, fine. How did you like the experience with Silicon Valley?

ROB FULLER

First off, I was super excited to have the opportunity. A friend of a friend of a friend was how I kind of got on the show. I was ecstatic just to have the opportunity to do it, and then they sent the contract over and it was 1,200 bucks. I'm like, "What?" It's 1,200 bucks for the entire season. I'm like, "Okay, I guess that means I'm five hours here and there. Total, 10 hours, maybe?"

MATT ASHBURN

It's like one step above working for free for exposure.

ROB FULLER

That's exactly what this was, essentially. But what I didn't realize was that they were going to be recording well into the night on the West Coast. I'm on the East Coast, and they would call up at 3:00 AM. I was expected to answer at 3:00 AM, 4:00 AM, you know, whatever time, and answer questions from a hacker's point of view, from an infosec point of view. This didn't happen a ton, but my wife didn't like it at all. And I'd, you know, get up at 3:00 AM, walk over to my computer, look at the lines that they had for that night, that they had rewritten or whatever, and I'm like, "No, that doesn't work out." So that was season 2. Season 3, I got to help build a character: the hacker character on season 3, Dinesh's girlfriend. I had a heavy hand in what they act like, their morals and everything. And then seasons 4 and 5, I didn't really get called at all. I still got paid for the seasons, but there was no infosec or hacker stuff, so that works. But season 6 is hands down one of the coolest experiences I ever had. They invited me out to sit in the writers' room to spell out the entire season and how the show ended. Sitting there with Mike Judge and the rest of the writers, I will never forget that experience, and there is no amount of pay that I would exchange for that experience itself.

JEFF PHILLIPS

That's awesome.

ROB FULLER

Overall, an awesome experience. It didn't start out very well, but it was amazing. I got to go to TV premieres, too. I didn't know TV series had premieres, but I got to go to those. One of the cool things is that my seat for the very first premiere I went to was next to Jason Statham's. He didn't show up, though. But I have a picture of me sitting next to the marker for his seat.

JEFF PHILLIPS

Next to the sign.

MATT ASHBURN

You got to sit next to his seat, at least.

ROB FULLER

Yes, I got that.

JEFF PHILLIPS

That's awesome.

MATT ASHBURN

Rob, thank you so much for joining us today. We really appreciated the conversation, as always. And for those at home, you can always follow him on Twitter at mubix. That's M-U-B-I-X on Twitter, and you can also read more about him in our show notes. And as always, if you liked what you heard today, you can subscribe to our show wherever you get your podcasts, watch episodes on YouTube, and view transcripts and other episode info on our website. Be sure to follow us on Twitter as well; @needlestack_pod is where you'll find us. We'll be back next week with more on SOC analysis and some other steps you can take in a SOC. To register, you can visit Authentic8 (that's authentic with the number 8) dot com, slash needlestack. We'll see you then.

In episode 20 we interview Rob Fuller, red team and CTI director for a major U.S. airline. Rob discusses how red teams can better leverage CTI, the grand mistakes of even the most security-conscious individuals, and why he thinks most security awareness training gets it wrong.

Key topics

  • Learn how threat intelligence teams and red teams can collaborate effectively to identify targets that otherwise may go unnoticed
  • Get an idea of rules of engagement for conducting OSINT on the surface and deep web
  • Hear the ultimate "doh!" in phishing investigations (hint: think twice, thrice, four times before you click that link, even as a SOC analyst)

About Rob Fuller

Rob has more than 16 years of experience covering all facets of information security. He has been behind the lines helping to design, build and defend the U.S. Marine Corps, U.S. Senate and Pentagon networks — as well as performing penetration tests and red team engagements against those same networks. More recently, Rob has built and led numerous red teams in successful engagements against many of the Fortune 50 companies, representing some of the best defensive teams in the industry.

Rob’s experience and expertise ranges from embedded and wireless devices in ICS/OT networks to standard IT infrastructures. He is a frequent speaker and trainer at a number of well-known security conferences. He has also served as a senior technical advisor for HBO’s show Silicon Valley.

Rob has acquired a number of certifications and awards over the years, but the ones he holds above the rest are father, husband, and United States Marine.
