Moscow’s Mission: New Rules, Old Tricks
How platforms, newsrooms, and government adapted after 2016
Voice & Vision | Moscow’s Mission, Part 4 | Labels, takedowns, and adaptation

The last installment of this series, Part 3: After the Click, looked at how the assault affected the U.S. This time we turn to what the media and government did in response. Platforms, newsrooms, and governments didn’t stand still after 2016. They built labels, ad libraries, takedown teams, and rumor control pages. Some of it worked. A lot of it just taught adversaries what to avoid next time. Still, patching a leaking boat underway beats doing nothing, even as the water kept finding new ways in. (ODNI; Meta disclosures; CISA)
Platforms moved first and most visibly. Facebook, for better or worse, created political ad transparency tools and began removing networks that showed coordinated inauthentic behavior. Twitter added labels for state-affiliated outlets and later for manipulated media. YouTube expanded information panels that point to authoritative sources. Each change added friction and sunlight, and each one also revealed new blind spots. (Meta disclosures; Twitter transparency materials; YouTube policy posts)
Government posture shifted as well. The FBI, Office of the Director of National Intelligence (ODNI), and Cybersecurity and Infrastructure Security Agency (CISA) issued joint statements before federal elections that warned about ongoing foreign activity, including the possibility of fabricated audio or video meant to inflame domestic narratives, even if much of the population didn’t believe them. CISA stood up election rumor control resources and pushed basic hygiene: verify before you share, expect late-breaking “leaks,” and check local officials first for results. That guidance helped lower the temperature in some places, but the warnings landed in a fragmented trust environment. (FBI-ODNI-CISA joint statements; CISA)
Newsrooms adapted their playbooks. Many built verification desks, slowed down on publishing from stolen caches without context, and partnered with forensics teams. Editors learned to ask who benefits from the timing of a “drop.” But they still faced the old incentive trap. Novelty gets clicks. A leaked document is hard to ignore even when the provenance is murky. That is why “hack and leak” remains attractive: it steers the agenda for days at a time. (AP reporting; WIRED reporting)
Adversaries learned, quickly. Fewer big obvious networks, more small clusters that look local. More cross-posting through real users. More video and audio, which travel faster and resist automated screening. Some activity moved into private or semi-private spaces where moderation is lighter and attribution is harder. The goal stayed the same: ride the edges of enforcement while keeping the look and feel of homegrown content. (ODNI; Meta disclosures)
Domestic imitation grew alongside foreign efforts. Once tactics are public, copycats don’t need a training manual. Anonymous accounts, memes that target identity, and selective “receipts” from hacked or context-free materials became the house style for a lot of political feeds. The line between foreign and domestic manipulation is now a pattern, not a logo. That complicates enforcement and public understanding. (SSCI; AP reporting)

Policy has been patchwork at best. Congress held hearings and floated bills that touched transparency, data access for researchers, and platform responsibility. States experimented with disclosure rules and election information hubs. Section 230 debates ran hot while concrete national standards lagged. The result has been uneven expectations and uneven enforcement across platforms and jurisdictions. (ODNI; AP reporting)
Still, what worked is worth noting. Faster takedowns of well-documented networks, better ad transparency, and more open archives for researchers made a difference. What didn’t work is just as clear. Speed, scale, and context collapse still favor attackers. Corrections arrive slower than claims. Screenshots live forever. When identity is the hook, a label is only a speed bump. (Meta disclosures; Science 2023 Facebook and Instagram experiments)
Here is the bottom line. We patched holes, and we should keep patching. We also have to plan as if the next wave is already studying our fixes. That means more transparency, more data for independent study, clearer civic information from officials, and steady pressure to reward accuracy over spectacle. None of that is a silver bullet; it just narrows the cracks. (ODNI; CISA; AP reporting)
This problem is not going away. In fact, it is likely to get bigger and much worse in the coming years. The United States is teetering on the brink, and there is no shortage of nefarious groups undermining the edge we stand on while trying to push us off. Their greatest advantage? The unshakable inability of the average American to use thought and reason…oh, and to cooperate with someone from the other side of the aisle.

Next, in Part 5, we focus on the attacker’s edge in this environment, and why speed and certainty keep beating patience and proof. If this gave you a clear map of what changed and what did not, share it with someone who thinks labels solved the problem.

Sources
• ODNI — Declassified assessments on Russian interference and foreign influence in the 2016 and 2020 election cycles.
• SSCI — U.S. Senate Select Committee on Intelligence, multi-volume bipartisan report on Russian active measures and social media operations.
• Meta disclosures — Public reports and archives on coordinated inauthentic behavior and policy updates.
• FBI-ODNI-CISA joint statements — Pre-election advisories on foreign influence risks, including warnings about fabricated media.
• CISA — Election security guidance and rumor control resources.
• AP reporting — Associated Press coverage of platform policy changes, election security posture, and interference cases.
• WIRED reporting — Industry reporting on takedowns, platform enforcement, and influence adaptations.
• Science (2023) — Facebook and Instagram experiments during the 2020 U.S. election, which inform what platform changes did and did not do.