
Mis- and disinformation abound on social media. But as violating users are deplatformed from mainstream sites, they've become hyper-concentrated on alternative social media where anything goes. We sit down with Dr. Welton Chang, CEO and co-founder of Pyrra Technologies, to discuss how AI/ML is helping researchers zero in on disinformation, domestic extremism, hate speech and more, and understand how to counteract it.

Key takeaways

  • How researchers can leverage ML to zero in on content of concern
  • What alt-social sites to add to your research rotation
  • Resource tip: sign up for Pyrra’s free newsletter

About Welton

Dr. Welton Chang is co-founder and CEO of Pyrra Technologies. Most recently he was the first Chief Technology Officer at Human Rights First and founded HRF's Innovation Lab. Prior to joining HRF, Welton was a senior researcher at the Johns Hopkins Applied Physics Laboratory where he led teams and developed technical solutions to address disinformation and online propaganda. Before joining APL, Welton served for nearly a decade as an intelligence officer at the Defense Intelligence Agency and in the Army, including two operational tours in Iraq and a tour in South Korea. Welton received a PhD and MA from the University of Pennsylvania, an MA from Georgetown University, and a BA from Dartmouth College.

Where to find Welton

https://twitter.com/weltonchang

https://twitter.com/PyrraTech

https://www.linkedin.com/company/74302644/

[music plays]

WELTON CHANG

The Board of Human Rights First actually came to me on January 7 and the weeks thereafter and said, you've been building this technology inside of HRF. What's your plan to actually share it with the private sector, with the government? Because clearly signals were missed.

[music plays]

JEFF PHILLIPS

Welcome to NeedleStack, the podcast for professional online research. I'm your host, Jeff Phillips, promoted this week from co-host and filling in for Matt Ashburn.

AUBREY BYRON

I'm Aubrey Byron, a producer on NeedleStack, and I'm joining Jeff behind the mic while Matt is away.

JEFF PHILLIPS

Now, today we're continuing our series on fact checking and debunking with a very special guest, Dr. Welton Chang. At the heart of a fact check or debunk is misinformation or the suspicion of misinformation. That's why today Welton is joining us. He is the co-founder and CEO of Pyrra Technologies, an intelligence solution for dark social media and the disinformation that resides there. He's also the former CTO and currently a Senior Advisor for Human Rights First. Welton Chang, welcome to the show!

WELTON CHANG

Thanks, Jeff. Thanks, Aubrey. It's great to be here.

JEFF PHILLIPS

Super excited to talk with you. Let's start off, Welton: can you just tell us a little bit about how you got started in intelligence?

WELTON CHANG

Yes, so I came to this work from a career spent in the Army and at the Defense Intelligence Agency, where, between the two places, I served close to a decade combined. Then I went off, decided that I was going to skill up a little bit, and went to the University of Pennsylvania, where I got a PhD, essentially in data science. Around the time I graduated, everybody was talking about disinformation and its effects on the 2016 election. And so I thought to myself, what better way to kind of continue serving than to apply the skills I just got at Penn to the problem at hand? So I wound up going to Johns Hopkins, the Applied Physics Laboratory there, where I led a number of teams and projects building technical solutions for the US government to try to detect foreign interference on social media. And that's essentially how I got my start really diving into what's going on online in these spaces, and it leads directly to the work I'm doing today at Pyrra.

AUBREY BYRON

And can you tell us a little bit about the work of Pyrra Tech and what made you want to start that company?

WELTON CHANG

Yes. So our origin story is not that typical for venture-backed startups. We actually started out as an R&D project inside Human Rights First, where I was the first Chief Technology Officer, and January the 6th happened while I was in that role. We were building a system called Extremist Explorer at HRF, which was collecting data from small alternative social media sites as well as larger ones like Twitter and Reddit. In the lead-up to the Capitol attack, we had observed an uptick in chatter across these platforms, and as folks were streaming into the Capitol building, I remember saying to myself that I thought we were going to be embarking on something new. And so the Board of Human Rights First actually came to me on January 7 and the weeks thereafter and said, you've been building this technology inside of HRF. What's your plan to actually share it with the private sector, with the government? Because clearly signals were missed. And so we started down the path of thinking about how to commercialize the technology, building an actual company from what was a prototype platform at that stage. And that's how Pyrra got its start in the space.

JEFF PHILLIPS

Well, that's amazing timing. Unfortunate timing, I guess, but that's crazy. When we talk about disinformation, are there different kinds of disinformation that you track? What are some of the trends you're seeing around disinformation lately?

WELTON CHANG

Yeah, so I think one thing to understand about Pyrra is that, as a platform, we enable our clients to pull back and find the kinds of content that they find concerning. So some of our existing clients are nonprofits who are investigating things like domestic extremism, anti-vax movements, anti-trans rhetoric, those types of phenomena that are happening online. The system is capable of detecting that content because what we're doing, essentially, is pulling in data from all these different sites and then piping it through a pipeline that detects the narratives that our users care about. So our users will put in a narrative, and then the system will ask, to what extent does the content we're pulling back resemble what the user cares about, and surface the things that score very highly. So from a technical perspective, we don't actually determine what disinformation is. The system enables our clients to figure out when there's concerning content, concerning narratives floating around in these spaces. It's not up to us to make that value judgment. Now, that being said, the other thing the system enables us to do is understand trends and understand what's going on across these communities. What are the topics that people are talking about? What are the things that come along with highly hateful, highly violent content? The system also detects when content is violent, offensive, hateful, and these other categories of speech that we've been building models for over the last several years. And so you can imagine the content that we're pulling back. When we see topics associated with highly violent thoughts, we're able to show that and say, hey, look, this is something that's of concern, either to a client or just in general, things that we put out as part of our weekly intel products.
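To make the narrative-matching step concrete, here is a minimal sketch of how a system might score incoming posts against a user-supplied narrative using sentence embeddings. The model choice, the top-k ranking, and the function names are illustrative assumptions, not Pyrra's actual pipeline.

```python
# Minimal sketch: rank posts by semantic similarity to a user-defined
# narrative and surface the top matches. Illustrative only -- the model
# and ranking scheme are assumptions, not Pyrra's actual pipeline.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder

def top_matches(narrative: str, posts: list[str], k: int = 5) -> list[tuple[str, float]]:
    """Return the k posts most similar to the narrative, highest score first."""
    narrative_vec = model.encode(narrative, convert_to_tensor=True)
    post_vecs = model.encode(posts, convert_to_tensor=True)
    scores = util.cos_sim(narrative_vec, post_vecs)[0]  # one cosine score per post
    ranked = sorted(zip(posts, scores.tolist()), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

# Example: surface posts that resemble a narrative of concern.
posts = [
    "Voting machines secretly flipped millions of ballots.",
    "Great weather for the farmers market this weekend!",
    "The machines fabricated the election results.",
]
for text, score in top_matches("voting machines rigged the election", posts, k=2):
    print(f"{score:.2f}  {text}")
```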

AUBREY BYRON

You mentioned that you don't just look at Twitter and Facebook, but you look into some more alternative social media sites. Why do you think it's important to look there as well?

WELTON CHANG

Yeah. So right now we actually don't do any Twitter or Facebook. Our focus is on this alternative space. And what we've observed over the last two, three years is that enforcement actions taken by major platforms, deplatforming users that are essentially frequent flyers of disinformation and hate and violence, have pushed those folks and their audiences and communities to the fringe, to smaller sites like Gab and Gettr, these alternative spaces where, as far as we can tell, anything goes. There's a distinct lack of content moderation. Google recently came out and said they're no longer supporting some of these applications because of the amount of violent content that's on them, the amount of death threats that are on them. And that is very consistent with what we observe on these platforms: either there's no will to do content moderation, or by explicit policy these sites say anything goes, free speech for all. That's what has led to this festering social phenomenon where this kind of content really snowballs, right? One user will put out a false theory about what happened, say, in Uvalde, Texas with the school shooting there, and the next thing you know, other users are parroting it. It gains a certain amount of traction, starts to move across these platforms, and in the blink of an eye it's migrated over to Twitter as the latest conspiracy theory floating around in these communities. And that's the reason why we think the alternative space is where to pay attention, because it's more and more the hospitable host for highly violent, highly hateful content, which then migrates over to the mainstream platforms once it's gained traction on these smaller sites.

AUBREY BYRON

Got you.

JEFF PHILLIPS

Yeah, that's super interesting. That's the source. And then it migrates.

AUBREY BYRON

Actually, I read an article not long ago about social media sites that open-source intelligence researchers should use in addition to Twitter, because, you know, Twitter is the big hub. Do you have your own list of sites that you think people should be looking at, in particular for understanding disinformation?

WELTON CHANG

I think the biggest ones for researchers to pay attention to are 4chan and Telegram public channels. Just in terms of audience reach and engagement and activity, those two stand out to me as places where a lot of this content is being propagated, generated, originated. Disqus comments, actually, as well. I think if you were to ask your audience about the last time they went to read the comments below a news article, it can get pretty, I'll just say, contentious in those comments. And we have found enough noxious content on places like Disqus that it's also a place to pay attention to. It's harder to monitor for individual researchers, because you have to go to all these different places and essentially either run your own scripts or do manual copy-paste. And that's one of the benefits of our platform: we're ingesting close to 100 million posts from these platforms every single week, we retain all that data, and we make it very easily searchable for our clients, taking that heavy lift of doing all this research manually off their shoulders. It's one of the core benefits of the Pyrra platform.
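As a rough illustration of the "run your own scripts" approach Welton mentions, here is a small sketch that polls 4chan's public read-only JSON API for catalog threads matching a keyword list. The endpoint and field names follow 4chan's documented API, but treat the details, and the placeholder keywords, as assumptions to verify rather than a hardened monitoring tool.

```python
# Rough sketch of a one-off monitoring script: poll a board's catalog
# via 4chan's public read-only JSON API and flag threads matching
# keywords. Endpoint and field names should be verified against the
# current API docs; the keyword list is a placeholder.
import requests

KEYWORDS = {"vaccine", "5g"}  # illustrative terms of interest

def matching_threads(board: str) -> list[dict]:
    """Return id/subject for catalog threads whose text hits a keyword."""
    url = f"https://a.4cdn.org/{board}/catalog.json"
    pages = requests.get(url, timeout=10).json()  # list of catalog pages
    hits = []
    for page in pages:
        for thread in page["threads"]:
            # "sub" is the thread subject, "com" the opening comment.
            text = (thread.get("sub", "") + " " + thread.get("com", "")).lower()
            if any(kw in text for kw in KEYWORDS):
                hits.append({"no": thread["no"], "subject": thread.get("sub", "")})
    return hits

print(matching_threads("pol")[:5])
```

Multiply this by every board, channel, and comment section a researcher cares about, and the appeal of a single ingestion pipeline becomes obvious.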

AUBREY BYRON

That's awesome.

JEFF PHILLIPS

It is awesome. Maybe we can poke a little bit at that in terms of machine learning, which came up earlier. What is the role of machine learning and algorithms in, I guess, the rise of extremism, and how is that technology and AI helping or hurting?

WELTON CHANG

Yeah, I mean, this is the name of your podcast, right? That's one of the reasons why I thought it was really apt. Let's say you start out with a billion posts, right? Whether that's how many posts happen on Facebook in like 12 hours or something like that. Even if your AI is 99% accurate, you're still pulling back millions of false positives, right? So that's the needle stack, right? If your haystack is that big, your needle stack winds up being almost incomprehensible for humans. And so what we try to do is neck that down first. We think of it as a data funnel. At the top of that funnel is all of our ingestion capability. And then we start to neck down to just the topics that our users care about. So if they're a company, maybe they care about their CEO's name, maybe they care about their brand name, variations on their brand name, maybe some key locations like their headquarters building, things like that. And then we use our AI pipeline to further distill that content down to just the most violent, the most hateful. You can sort by those different scores that are available on our platform, so that you wind up going from tens of thousands of posts to just the handful that matter, the handful that you have to pay attention to: to do deep dives into the users that are propagating that information, to figure out whether this is now a trend, a narrative that many different users believe, that sort of thing. And that's where I think the AI comes in and is really helpful: going from that needle stack to just that handful of needles that you need to care about.
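The "needle stack" arithmetic is worth making explicit. A quick back-of-the-envelope calculation, where the one-in-a-thousand prevalence figure is an assumption for illustration:

```python
# Back-of-the-envelope math for the "needle stack" problem.
# The prevalence figure is an assumption for illustration.
total_posts = 1_000_000_000  # a billion posts, per Welton's example
accuracy = 0.99              # a "99% accurate" classifier
prevalence = 0.001           # assume 1 in 1,000 posts is truly of concern

true_needles = total_posts * prevalence
benign_posts = total_posts - true_needles
false_positives = benign_posts * (1 - accuracy)  # 1% of benign posts get flagged

print(f"Truly concerning posts: {true_needles:>12,.0f}")    # 1,000,000
print(f"Posts flagged in error: {false_positives:>12,.0f}")  # 9,990,000
# Roughly ten false alarms for every real hit: far too many "needles"
# for human analysts, which is why the funnel narrows first by
# user-defined topics and only then by model scores.
```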

AUBREY BYRON

Can you talk a little bit about how the pandemic has affected the spread of disinformation? Have you seen a change and are there other major events that you're seeing play a role?

WELTON CHANG

I mean, part of this is just anecdotal and from experience, and I think it makes sense when you start thinking about how our lives changed as a result of having to do years of social distancing, meeting only virtually instead of in person. I think we're just much more isolated and separated. And there's some research that backs this up: overall, just a more depressed population. We've all gone through this very, in some ways, traumatic experience of having to live with an endemic disease that was very, very harmful to millions of Americans. And I think that people saw social media as an outlet at that stage, right? Like, you can't get sick, really, from being on Twitter too much. Although maybe I should take that back, mentally. So we've seen an upsurge in activity, and of course the pandemic itself was a topic, right? Whether it was people going after Moderna and Pfizer for their vaccines, calling them unsafe, all the way to the conspiracy theories about Bill Gates and mind control, and claims that there are nanoparticles inside the vaccines that are going to communicate with 5G towers to basically control you. We all know that none of that is true. But there are communities of folks that have grown up around the pandemic that continue to believe these things are happening in our society today. And you can't blame social media by itself. You can't blame the pandemic by itself. These factors are tied up in broader societal trends and issues that we observe, whether it's our public education system and our ability to think critically, or how fast things move online. And it all kind of played into this melting pot that we now observe as the alternative social media space, where the crazy really takes hold and people believe all sorts of really far-out, outlandish theories about how the world works.

JEFF PHILLIPS

Well, you and Pyrra operate in a really interesting space. Can you go home and sleep at night after following all this stuff? But as we start to wrap up, maybe any thoughts or tips for professional researchers, things you've found useful that you might share with the audience?

WELTON CHANG

Yeah, I mean, you touched a little bit on self-care. You should be cognizant of your own mental state as you wade through some of this content, especially the more toxic stuff that's out there. And if any of your audience deals with things like child sexual abuse material, things that are happening on the Dark Web, it can greatly affect you over time. I'm a father of two. You know, I've unfortunately had the experience of having gone overseas twice when I was in the Army and at DIA, two tours in Iraq, and I consider myself to be a fairly mentally strong person. But even some of the content that we continue to observe can be emotionally affecting. So just having an understanding of where your own mental state is at is super important. And then, whether or not I can sleep well at night, a big part of that is just for your audience to know that the techniques they apply in their work can be turned back on them. As we've gotten more popular as a company and more public about our activities, we've seen threats come our way, and so we've taken countermeasures to eliminate our own data from places like data brokerages and to maintain a firewall between the company's activities and our own personal lives. I don't have an Instagram account or a Facebook account, right? So I think it's important for your audience to take those measures as well, because if they're really good at something, other people can get close to that level too, right? Those same techniques can be turned back on us as concerned citizens, as professionals who work in this space. And it can be a really strange feeling when you're in the crosshairs, and, you know, I wouldn't wish it on anybody. It's something that comes with the territory of the work that we do. But remind yourself that the things we do are important for the health of our democracy, as well as for our families in the long run, and that's why we put up with things like getting threats sent our way.

JEFF PHILLIPS

That's great advice. And using some of your skills again to look at yourself, I think that's super practical. I want to thank our guest, Dr. Welton Chang. If you liked what you heard, you can subscribe to our show wherever you get your podcasts, watch episodes on YouTube and view transcripts and other episode info on our website, authentic8.com/needlestack. That's authentic with the number eight dot com slash needlestack. And be sure to follow us on Twitter @needlestack_pod. We'll be back next time with more on fact checking and debunking. We'll see you then.

Keep listening

S1E28 | Why Russian disinformation is so effective

S1E30 | How to perform a fact-check: from start to finish

S1E25 | Autopsy of a story: the art and science of fact-checking