Friday, January 12, 2024

My book, Algorithms and Misinformation

Misinformation and disinformation are the biggest problems on the internet.

To solve a problem, you need to understand the problem. In Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It, I claim that the problem is not that misinformation exists, but that so many people see it. I explain why algorithms amplify scams and propaganda and how that can easily happen unintentionally, and I offer solutions.

You can read much of the book for free. If you want a single article summary, this overview describes the entire book:

If you are interested in roughly what you would get from skimming the book, you may want a bit more: If you want part of what you would get from reading the entire book, you may want all the excerpts: I wanted this book to be part of the debate on how to solve misinformation and disinformation on the internet. It offers practical solutions and was intended to be an essential part of the discussion about viable fixes for what has become one of the biggest problems of our time.

I wrote, developed, and edited this book over four years. It was under contract with two agents for a year but will not be published. The full manuscript had many more examples, interviews, and stories, but the excerpts above give some of what you would have gotten from reading the full book.

Some might want to jump straight to ideas for solutions. I think solutions depend on who you are.

For those inside of tech companies, this book shows how other companies have fixed their algorithms and made more revenue. Because it's easy for executives to unintentionally cause search and recommendations to amplify scams, it's important for everyone to question what the algorithms are optimized for and make sure those targets point toward the long-term growth of the company.

For the average person, because the book shows companies actually make more money when they don't allow their algorithms to promote scams, it offers hope that complaining about scammy products, and no longer using them, will change the internet we use every day.

For policy makers, because it's hard to regulate AI but easy to regulate what they already know how to regulate, this book claims they should target scammy advertising that funds misinformation, increase fines for promoting fraud, and ramp up antitrust efforts (to increase consumers' ability to switch to alternatives and further raise long-term costs on companies that enshittify their products).

Why these are the solutions requires exploring the problem. Most of the book is about how companies build their algorithms -- optimizing them over time -- and how that can accidentally amplify misinformation. To solve the problem, focus not on the fact that misinformation exists, but on how many people see misinformation and disinformation. If the goal is reducing it to nuisance levels rather than eliminating it entirely, misinformation on the internet is a fixable problem.

Through stories, examples, and research, this book shows why so many people see misinformation and disinformation, why the amplification is often unintentional, and why it doesn't maximize revenue for companies. Understanding why we see so much misinformation is the key to coming up with practical solutions.

I hope others find this useful. If you do, please let me know.

Wednesday, January 10, 2024

Book excerpt: Conclusion

(This is one version of the conclusion from my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

Wisdom of the crowds is the idea that summarizing the opinions of a lot of people is often very useful. Computers can do this too. Wisdom of the crowd algorithms operating at a massive scale pick everything we see when we use the internet.

Computer algorithms look at people's actions as if they were votes for what is interesting and important. Search and recommendations on your favorite websites combine what millions of people do to help you find what you need.

In recent years, something has gone terribly wrong. Wisdom of the crowds has failed us.

Misinformation and scammers are everywhere on the internet. You cannot buy something from Amazon, see what friends are doing on Facebook, or try to read news online without encountering fraudsters and propagandists.

This is the story of what happened and how to fix it, told by the insiders who built the internet we have today.

Throughout the last thirty years of the internet, we fought fraudsters and scammers trying to manipulate what people see. We fought scammers when we built web search. We fought spammers trying to get into our email inboxes. We fought shills when we built algorithms recommending what to buy.

Seeing these hard battles through the lens of insiders reveals an otherwise hidden insight: how companies optimize their algorithms is what amplifies misinformation and causes the problems we have today.

The problem is not the algorithm. The problem is how algorithms are tuned and optimized.

Algorithms will eventually show whatever the team is rewarded for making the algorithms show. When algorithms are optimized badly, they can do a lot of harm. Through the metrics and incentives they set up, teams and executives control these algorithms and how they are optimized over time.

We have control. People control the algorithms. We should make sure these algorithms built by people work well for people.

Wisdom of the crowd algorithms such as recommender systems, trending, and search rankers are everywhere on the internet. Because they control what billions see every day, these algorithms are enormously valuable.

The algorithms are supposed to work by sharing what people find interesting with other people who have not seen it yet. They can enhance discovery, helping people find things they would not have found on their own, but only if they are tuned and optimized properly.

Short-term measures like clicks are bad metrics for algorithms. These metrics encourage scams, sensationalistic content, and misinformation. Amplifying fraudsters and propagandists creates a terrible experience for customers and eventually hurts the company.

Wisdom of the crowds doesn’t work when the crowds are fake. Wisdom of the crowds will amplify scams and propaganda if a few people can shout down everyone else with their hordes of bots and shills. Wisdom of the crowds requires information from real, independent people.

If executives tell teams to optimize for clicks, it can be hard to remove fake accounts, shills, and sockpuppets. Click metrics will be higher when bad actors shill, because fake crowds faking popularity look like lots of new accounts creating lots of new engagement. But none of it is real, and none of it helps the company or its customers in the long run.

Part of the solution is only using reliable accounts for wisdom of the crowds. Wisdom of the trustworthy makes it much harder for bad actors to create fake crowds and feign popularity. Wisdom of the trustworthy means using only provably human, reliable accounts as input to the algorithms. To deter fraudsters from creating lots of fake accounts, trust must be hard to gain and easy to lose.

Part of the solution is to recognize that most metrics are flawed proxies for what you really want. What companies really want is satisfied customers who stay for a long time. Always question whether your metrics are pointing you at the right target. Always question whether your wisdom of the crowd algorithms are genuinely helping customers.

It's important to view optimizing algorithms as investing in the long-term. Inside tech companies, to measure the success of those investments and the long-term success of the company, teams should run long experiments to learn more about long-term harm and costs. Develop metrics that approximate long-term retention and growth. Everyone on teams should constantly question metrics and frequently change goal metrics to improve them.
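As a rough illustration of what such a long experiment might report, the sketch below compares a short-term metric (clicks) against a long-term proxy (whether users are still active months later) across two experiment arms. The field names, the 90-day window, and the toy numbers are assumptions for illustration, not any company's actual metrics.

```python
# Illustrative sketch: compare a short-term metric (clicks) with a long-term
# proxy (still active 90 days later) across two arms of a long experiment.
# Field names, the 90-day window, and the numbers are assumptions.
from dataclasses import dataclass

@dataclass
class UserStats:
    arm: str             # "control" or "treatment"
    clicks: int          # short-term engagement during the experiment
    active_day_90: bool  # long-term proxy: still active 90 days later

def summarize(users: list[UserStats]) -> dict[str, dict[str, float]]:
    summary: dict[str, dict[str, float]] = {}
    for arm in {u.arm for u in users}:
        arm_users = [u for u in users if u.arm == arm]
        summary[arm] = {
            "avg_clicks": sum(u.clicks for u in arm_users) / len(arm_users),
            "retention_90d": sum(u.active_day_90 for u in arm_users) / len(arm_users),
        }
    return summary

# A treatment can win on clicks yet lose on retention; judged only on the
# short-term metric, it would look like a success.
users = [
    UserStats("control", 10, True), UserStats("control", 12, True),
    UserStats("treatment", 18, False), UserStats("treatment", 16, True),
]
print(summarize(users))
```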

As Google, Netflix, YouTube, and Spotify have discovered, companies make more money if they invest in good algorithms that don't chase clicks.

Even so, some companies may need encouragement to focus on the long-term, especially if their market power means customers have nowhere else to go.

Consumer groups and policy makers can help by pushing for more regulation of the advertising that funds scams, antitrust enforcement to maintain competition and offer alternatives to consumers who are fed up with enshittified products, and the real threat of substantial and painful fines for failing to minimize scams and fraud.

We can have the internet we want. We can protect ourselves from financial scams, consumer fraud, and political propaganda. We can fix misinformation on the internet.

With a deeper understanding of why wisdom of the crowd algorithms can amplify misinformation and cause harm, we can fix seemingly ungovernable algorithms.

Monday, January 08, 2024

Book excerpt: A win-win-win for customers, companies, and society

(This is an excerpt from drafts of my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

Everyone wins -- companies, consumers, and society -- if companies fix their algorithms to stop amplifying scams and misinformation.

Executives are often tempted to reward their teams for simpler success metrics like engagement. But companies make more money if they focus on long-term customer satisfaction and retention.

YouTube had a problem. They asked customers, “What’s the biggest problem with your homepage today?” The answer came back: “The #1 issue was that viewers were getting too many already watched videos on their homepage.” In our interview, YouTube Director Todd Beaupré discussed how YouTube made more money by optimizing their algorithms for diversity, customer retention, and long-term customer satisfaction.

YouTube ran experiments. They found that reducing already watched recommendations reduced how many videos people watched from their home page. Beaupré said, “What was surprising, however, was that viewers were watching more videos on YouTube overall. Not only were they finding another video to enjoy to replace the lost engagement from the already watched recommendations on the homepage, they found additional videos to watch as well. There were learning effects too. As the experiment ran for several months, the gains increased.”

Optimizing not for accuracy but for discovery turned out to be one of YouTube’s biggest wins. Beaupré said, “Not only did we launch this change, but we launched several more variants that reduced already watched recommendations that combined to be the most impactful launch series related to growing engagement and satisfaction that year.”

Spotify researchers found the same thing, that optimizing for engagement right now misses a chance to show something that will increase customer engagement in the future. They said, “Good discoveries often lead to downstream listens from the user. Driving discovery can help reduce staleness of recommendations, leading to greater user satisfaction and engagement, thereby resulting in increased user retention. Blindly optimizing for familiarity results in potential long term harms.” In the short-term, showing obvious and familiar things might get a click. In the long-term, helping customers discover new things leads to greater satisfaction and better retention.

Companies that don't optimize for engagement make more money. In a paper “Focus on the Long-Term: It’s Better for Users and Business,” Googlers wrote that “optimizing based on short-term revenue is the obvious and easy thing to do, but may be detrimental in the long-term if user experience is negatively impacted.” What can look like a loss in short-term revenue can actually be a gain in long-term revenue.

Google researchers found that it was very important to measure long-term revenue because optimizing for engagement ignores that too many ads will make people ignore your ads or stop coming entirely. Google reported that cutting the number of ads in their product in half improved customer satisfaction and resulted in a net positive change in ad revenue, but they could only see that gain when they measured over long periods of time.

Netflix uses very long experiments to keep their algorithms targeting long-term revenue. From the paper "Netflix Recommender System": “We ... let the members in each [experimental group] interact with the product over a period of months, typically 2 to 6 months ...The time scale of our A/B tests might seem long, especially compared to those used by many other companies to optimize metrics, such as click-through rates ... We build algorithms toward the goal of maximizing medium-term engagement with Netflix and member retention rates ... If we create a more compelling service by offering personalized recommendations, we induce members who were on the fence to stay longer, and improve retention.”

Netflix's goal is keeping customers using the product. If customers stay, they keep generating revenue, which maximizes long-term business value. “Over years of development of personalization and recommendations, we have reduced churn by several percentage points. Reduction of monthly churn both increases the lifetime of an existing subscriber and reduces the number of new subscribers we need to acquire.”

Google revealed how they made more money when they did not optimize for engagement. Netflix revealed they focus on keeping people watching Netflix for many years, including their unusually lengthy experiments that sometimes last over a year, because that makes them more money. Spotify researchers revealed how they keep people subscribing longer when they suggest less obvious, more diverse, and more useful recommendations, making them more money. YouTube, after initially optimizing for engagement, switched to optimizing for keeping people coming back to YouTube over years, finding that is what made them the most money in the long run.

Scam-filled, engagement-hungry, or manipulated algorithms make less money than helpful algorithms. Companies such as Google, YouTube, Netflix, Wikipedia, and Spotify offer lessons for companies such as Facebook, Twitter, and Amazon.

Some companies know that adversaries attack and shill their algorithms because the profit motive is so high from getting to the top of trending algorithms or recommendations. Some companies know that if they invest in eliminating spam, shilling, and manipulation, that investment will pay off in customer satisfaction and higher growth and revenue in the future. Some companies align the interests of their customers and the company by optimizing algorithms for long-term customer satisfaction, retention, and growth.

Wisdom of the crowds failed the internet. Then the algorithms that depend on wisdom of the crowds amplified misinformation across the internet. Some already have shown the way to fix the problem. If all of us borrow lessons from those that already have solutions, we can solve the problem of algorithms amplifying misinformation. All companies can fix their algorithms, and they will make more money if they do.

Many executives are unaware of the harms of optimizing for engagement. Many do not realize when they are hurting the long-term success of the company.

This book has recommendations for regulators and policy makers, focusing their work on incentives including executive compensation and the advertising that funds misinformation and scams. This book provides examples to teams inside companies of why they should not optimize for engagement and what companies do instead. And this book provides evidence consumers can use to advocate for companies better helping their customers while also increasing profits for the company.

Sunday, January 07, 2024

Book excerpt: Use only trustworthy behavior data

(This is an excerpt from drafts of my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

Adversaries manipulate wisdom of crowds algorithms by controlling a crowd of accounts.

Their controlled accounts can then coordinate to shill whatever they like, shout down opposing views, and create an overwhelming flood of propaganda that makes it hard for real people to find real information in the sea of noise.

The Aspen Institute's Commission on Information Disorder suggests in its final report that the problem is often confined to a surprisingly small number of accounts, amplified by coordinated activity from other controlled accounts.

They describe how it works: “Research reveals that a small number of people and/or organizations are responsible for a vast proportion of misinformation (aka ‘superspreaders’) ... deploying bots to promote their content ... Some of the most virulent propagators of falsehood are those with the highest profile [who are often] held to a lower standard of accountability than others ... Many of these merchants of doubt care less about whether they lie, than whether they successfully persuade, either with twisted facts or outright lies.”

The authors of this report offer a solution. They suggest that these manipulative accounts should not be amplified by algorithms, making the spreading of misinformation much more costly and much more difficult to do efficiently.

Specifically, they argue social media companies and government regulators should “hold superspreaders of mis- and disinformation to account with clear, transparent, and consistently applied policies that enable quicker, more decisive actions and penalties, commensurate with their impacts — regardless of location, or political views, or role in society.”

Because just a few accounts, supported by substantial networks of controlled shill accounts, are the problem, they add that social media should focus “on highly visible accounts that repeatedly spread harmful misinformation that can lead to significant harms.”

Problems with adversaries manipulating, shilling, and spamming have a long history. One way to figure out how to solve the problem is to look at how others mitigated these issues in the past.

Particularly helpful are the solutions for web spam. As described in the research paper "Web Spam Detection with Anti-Trust Rank", web spam is “artificially making a webpage appear in the top results to various queries on a search engine.” The web spam problem is essentially the same problem faced by social media rankers and recommenders. Spammers manipulate the data that ranking and recommender algorithms use to determine what content to surface and amplify.

The researchers described how bad actors create web spam: “A very common example ... [is] creating link farms, where webpages mutually reinforce each other ... [This] link spamming also includes ... putting links from accessible pages to the spam page, such as posting web links on publicly accessible blogs.”

These are essentially the same techniques adversaries use on social media: controlled accounts and bots post, reshare, and like content, reinforcing how popular it appears.

To fix misinformation on social media, learn from what has worked elsewhere. TrustRank is a popular and widely used technique in web search engines to reduce the efficiency, effectiveness, and prevalence of web spam. It “effectively removes most of the spam” without negatively impacting non-spam content.

How does it work? “By exploiting the intuition that good pages -- i.e. those of high quality -- are very unlikely to point to spam pages or pages of low quality.”

The idea behind TrustRank is to start from the trustworthy and treat the actions of those trustworthy people as likely to be trustworthy as well. Trusted accounts link to, like, share, and post information that is trustworthy. Whatever they endorse becomes mostly trusted too, and the process repeats. In this way, trust gradually propagates out from a seed of known reliable accounts to others.

As the "Combating Web Spam with TrustRank" researchers put it, “We first select a small set of seed pages to be evaluated by an expert. Once we manually identify the reputable seed pages, we use the link structure of the web to discover other pages that are likely to be good ... The algorithm identifies other pages that are likely to be good based on their connectivity with the good seed pages.”

TrustRank works for web spam in web search engines. “We can effectively filter out spam from a significant fraction of the web, based on a good seed set of less than 200 sites.” Later work suggested that adding Anti-Trust Rank helps as well; it starts from a set of known untrustworthy actors with a history of spamming, shilling, and attempting to manipulate ranker algorithms, then assumes that everything they have touched is also likely to be untrustworthy.
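To make the propagation concrete, here is a minimal sketch of TrustRank-style trust propagation over a tiny link graph. The graph, seed set, damping factor, and iteration count are illustrative assumptions, not the parameters of the published algorithm.

```python
# Minimal sketch of TrustRank-style propagation over a link graph.
# The graph, seed set, damping factor, and iteration count are illustrative.
def trust_rank(graph: dict[str, list[str]], seeds: set[str],
               damping: float = 0.85, iterations: int = 20) -> dict[str, float]:
    # Start with trust concentrated on the manually vetted seed pages.
    trust = {page: (1.0 / len(seeds) if page in seeds else 0.0) for page in graph}
    for _ in range(iterations):
        new_trust = {page: 0.0 for page in graph}
        for page, links in graph.items():
            if not links:
                continue
            share = trust[page] / len(links)  # split a page's trust among its outlinks
            for target in links:
                new_trust[target] += damping * share
        # Keep re-injecting trust at the seeds so it decays with distance from them.
        for seed in seeds:
            new_trust[seed] += (1.0 - damping) / len(seeds)
        trust = new_trust
    return trust

graph = {
    "good-news.example": ["library.example", "spam-farm.example"],
    "library.example": ["good-news.example"],
    "spam-farm.example": ["spam-farm.example"],  # link farm pointing at itself
}
print(trust_rank(graph, seeds={"library.example"}))
```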

In social media, much of the problem is not that bad content exists at all, but that bad content is amplified by algorithms. Specifically, rankers and recommenders on social media look at likes, shares, and posts, then think that shilled content is popular, so the algorithms share the shilled content with others.

The way this works, both for web search and for social media, is that wisdom of the crowd algorithms including rankers and recommenders count votes. A link, like, click, purchase, rating, or share is a vote that a piece of content is useful, interesting, or good. What is popular or trending is what gets the most votes.

Counting votes in this way easily can be manipulated by people who create or use many controlled accounts. Bad actors vote many times, effectively stuffing the ballot box, to get what they want on top.

If wisdom of crowds only uses trustworthy data from trustworthy accounts, shilling, spamming, and manipulation becomes much more difficult.

Only accounts known to be trustworthy should matter for what is considered popular. Known untrustworthy accounts with a history of being involved in propaganda and shilling should have their content hidden or ignored. And unknown accounts, such as brand new accounts or accounts that have no connection to trustworthy accounts, also should be ignored as potentially harmful and not worth the risk of including.
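As a minimal sketch of that rule, the snippet below counts popularity votes only from accounts above a trust threshold, ignoring unknown and untrusted accounts. The account names, scores, and threshold are made up for illustration.

```python
# Minimal sketch: count popularity "votes" (likes, shares, clicks) only from
# accounts already known to be trustworthy. Names and threshold are illustrative.
from collections import Counter

def trusted_popularity(votes: list[tuple[str, str]],
                       trust_scores: dict[str, float],
                       threshold: float = 0.8) -> Counter:
    counts: Counter = Counter()
    for account, item in votes:
        # Unknown accounts default to zero trust; untrusted and new accounts are ignored.
        if trust_scores.get(account, 0.0) >= threshold:
            counts[item] += 1
    return counts

votes = [("alice", "article-1"), ("bot-1", "scam-post"), ("bot-2", "scam-post"),
         ("bob", "article-1"), ("new-account", "scam-post")]
trust_scores = {"alice": 0.95, "bob": 0.9, "bot-1": 0.1}
print(trusted_popularity(votes, trust_scores).most_common())
# The shilled item gets no votes even though it received the most raw engagement.
```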

Wisdom of the trustworthy dramatically raises the costs for adversaries. No longer can a few dozen accounts, acting together, successfully shill content.

Now, only trustworthy accounts amplify. And because trust is hard to gain and easily lost, disinformation campaigns, propaganda, shilling, and spamming often become cost prohibitive for adversaries.

As Harvard fellow and security expert Bruce Schneier wrote in a piece for Foreign Policy titled “8 Ways to Stay Ahead of Influence Operations,” the key is recognizing the fake accounts that act together in a coordinated way to manipulate the algorithms, and then keeping their data from informing ranker and recommender algorithms.

Schneier wrote, “Social media companies need to detect and delete accounts belonging to propagandists as well as bots and groups run by those propagandists. Troll farms exhibit particular behaviors that the platforms need to be able to recognize.”

Shills and trolls are shilling and trolling. That is not normal human behavior.

Real humans don’t all act together, at the same time, to like and share some new content. Real humans cannot act many times per second or vote on content they have never seen. Real humans cannot all like and share content from a pundit as soon as it appears and then all do it again exactly in the same way for the next piece of content from that pundit.

When bad actors use controlled fake accounts to stuff the ballot box, the behavior is blatantly not normal.

There are a lot of accounts in social media today that are being used to manipulate the wisdom of the crowd algorithms. Their clicks, likes, and shares are bogus and should not be used by the algorithms.

Researchers in Finland studying the phenomenon back in 2021 wrote that “5-10% of Twitter accounts are bots and responsible for the generation of 20-25% of all tweets.” The researchers describe these compromised accounts as “cyborgs” and write that they “have characteristics of both human-generated and bot-generated accounts."

These controlled accounts are unusually active, producing a far larger share of all tweets than their share of accounts. This was also a low estimate of the total number of manipulated accounts in social media, as it did not include compromised accounts, accounts that are paid to shill, or accounts whose owners are paid to share their passwords so someone else can occasionally use them to shill.

Because bad actors using accounts to spam and shill must quickly act in concert to spam and shill, and often do so repeatedly with the same accounts, their behavior is not normal. Their unusually active and unusually timed actions can be detected.

One detection tool, presented by researchers at the AAAI (Association for the Advancement of Artificial Intelligence) conference, was a “classifier ... capturing the local and global variations of observed characteristics along the propagation path ... The proposed model detected fake news within 5 min of its spread with 92 percent accuracy for Weibo and 85 percent accuracy for Twitter.”
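Far simpler than the published classifier, the sketch below illustrates the underlying idea: flag pairs of accounts that repeatedly act on the same items within seconds of each other. The event format, time window, and thresholds are illustrative assumptions.

```python
# Much simpler than the published classifier: flag pairs of accounts that
# repeatedly act on the same items within seconds of each other.
# The event format, time window, and thresholds are illustrative assumptions.
from collections import defaultdict

def suspicious_accounts(events: list[tuple[str, str, float]],
                        window_seconds: float = 5.0,
                        min_items_together: int = 3) -> set[str]:
    # events: (account, item, timestamp in seconds)
    by_item: dict[str, list[tuple[float, str]]] = defaultdict(list)
    for account, item, ts in events:
        by_item[item].append((ts, account))
    together: dict[frozenset, int] = defaultdict(int)
    for actions in by_item.values():
        actions.sort()
        for i, (ts_i, acct_i) in enumerate(actions):
            for ts_j, acct_j in actions[i + 1:]:
                if ts_j - ts_i > window_seconds:
                    break
                if acct_i != acct_j:
                    together[frozenset((acct_i, acct_j))] += 1
    flagged: set[str] = set()
    for pair, count in together.items():
        if count >= min_items_together:  # same pair co-acting within seconds, again and again
            flagged |= pair
    return flagged

events = [("bot-1", f"post-{n}", 100.0 * n) for n in range(4)] + \
         [("bot-2", f"post-{n}", 100.0 * n + 1.0) for n in range(4)] + \
         [("alice", "post-0", 500.0)]
print(suspicious_accounts(events))  # the two bots are flagged, alice is not
```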

Professor Kate Starbird, who runs a research group studying disinformation at the University of Washington, wrote that social media companies have taken exactly the wrong approach, exempting prominent accounts associated with misinformation, disinformation, and propaganda rather than subjecting them and their shills to skepticism and scrutiny. Starbird wrote, “Research shows that a small number of accounts have outsized impact on the spread of harmful misinfo (e.g. around vaccines and false/misleading claims of voter fraud). Instead of whitelisting these prominent accounts, they should be held to higher levels of scrutiny and accountability.”

Researchers have explained the problem: platforms are willing to amplify anything that isn’t provably bad rather than amplifying only what is known to be trustworthy. In a piece titled Computational Propaganda, Stanford Internet Observatory researcher Renee DiResta wrote, “Our commitment to free speech has rendered us hesitant to take down disinformation and propaganda until it is conclusively and concretely identified as such beyond a reasonable doubt. That hesitation gives ... propagandists an opportunity.”

The hesitation is problematic, as it makes it easy to manipulate wisdom of crowds algorithms. “Incentive structures, design decisions, and technology have delivered a manipulatable system that is being gamed by propagandists,” DiResta said. “Social algorithms are designed to amplify what people are talking about, and popularity is ... easy to feign.”

Rather than starting from the assumption that every account is real, the algorithms should start with the assumption that every account is fake.

Only provably trustworthy accounts should be used by wisdom of the crowd algorithms such as trending, rankers, and recommenders. When considering what is popular, not only should fake accounts coordinating to shill be ignored, but also there should be considerable skepticism toward new accounts that have not been proven to be independent of the others.

With wisdom of crowds algorithms, rather than asking which accounts should be banned and excluded, ask what minimum set of trustworthy accounts is needed to keep the perceived quality of the recommendations from dropping. There is no reason to use all the data when the biggest problem is shilled and untrustworthy data.

Companies are playing whack-a-mole with bad actors who just create new accounts or find new shills every time they’re whacked because it’s so profitable -- like free advertising -- to create fake crowds that manipulate the algorithms.

Propagandists and scammers are loving it and winning. It’s easy and lucrative for them.

Rather than classify accounts as spam, classify accounts as trustworthy. Only use trustworthy data as input to the algorithms, ignoring anything unknown or borderline as well as known spammers and shills.

Happily toss big data, anything suspicious at all. Do not be concerned about accidentally marking many new or borderline accounts as shills when deciding what to feed the recommender algorithms. None of those false positives matter if they do not reduce the perceived quality of the recommendations.

As with web spam and e-mail spam, the goal isn’t eliminating manipulation, coordination, disinformation, scams, and propaganda.

The goal is raising the costs on adversaries, ideally to the point where most of it is no longer cost-effective. If bad actors no longer find it easy and effective to try to manipulate recommender systems on social media, most will stop.

Thursday, January 04, 2024

Book excerpt: Data and metrics determine what algorithms do

(This is an excerpt from drafts of my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

Wisdom of the crowd algorithms, including rankers and recommenders, work from data about what people like and do. Teams inside tech companies gather user behavior data then tune and optimize algorithms to maximize measurable targets.

The data quality and team incentives control what the algorithms produce and how useful it is. When the behavior data or goal metrics are bad, the outcome will be bad. When the wisdom of the crowds data is trustworthy and when the algorithms are optimized for the long-term, algorithms like recommendations will be useful and helpful.

Queensland University Professor Rachel Thomas warned that “unthinking pursuit of metric optimization can lead to real-world harms, including recommendation systems promoting radicalization ... The harms caused when metrics are overemphasized include manipulation, gaming, a focus on short-term outcomes to the detriment of longer-term values ... particularly when done in an environment designed to exploit people’s impulses and weaknesses.”

The problem is that “metrics tend to overemphasize short-term concerns.” Thomas gave as an example the problems YouTube had before 2017 after picking “watch time” (how long people spend watching videos) years earlier as a proxy metric for user satisfaction. An algorithm that tries to pick videos people will watch right now will tend to show whatever gets a click, including risqué videos or lies that make people angry. So YouTube struggled with their algorithms amplifying sensationalistic videos and scams. These clickbait videos looked great on short-term metrics like watch time but repelled users in the long term.

“AI is very effective at optimizing metrics,” Thomas said. Unfortunately, if you pick the wrong metrics, AI will happily optimize for the wrong thing. “The unreasonable effectiveness of metric optimization in current AI approaches is a fundamental challenge to the field and yields an inherent contradiction: solely optimizing metrics leads to far from optimal outcomes.”

Unfortunately, it’s impossible to get a perfect success metric for algorithms. Not only are metrics “just a proxy for what you really care about,” but also all “metrics can, and will be gamed.” The goal has to be to make the success metrics as good as possible and keep fixing the metrics as they drift away from the real goal of the long-term success of the company. Only by constantly fixing the metrics will teams optimize the algorithms to help the company grow and profit over the years.

A classic article by Steven Kerr, “On the folly of rewarding A while hoping for B,” was originally published back in 1975. The author wrote: “Many managers seek to establish simple, quantifiable standards against which to measure and reward performance. Such efforts may be successful in highly predictable areas within an organization, but are likely to cause goal displacement when applied anywhere else.”

Machine learning algorithms need a target. Teams need to have success metrics for algorithms so they know how to make them better. But it is important to recognize that metrics are likely to be wrong and to keep trying to make them better.

You get what you measure. When managers pick a metric, there are almost always rewards and incentives tied to that metric. Over time, as people optimize for the metric, you will get that metric maximized, often at the expense of everything else, and often harming the true goals of the organization.

Kerr went on to say, “Explore what types of behavior are currently being rewarded. Chances are excellent that ... managers will be surprised by what they find -- that firms are not rewarding what they assume they are.” An editor when Kerr's article was republished in 1995 summarized this as, “It’s the reward system, stupid!”

Metrics are hard to get right, especially because they often end up being a moving target over time. The moment you put a metric in place, people both inside and outside the company will start to find ways to succeed against that metric, often finding cheats and tricks that move the metric without helping customers or the company. It's as Goodhart’s Law says: “When a measure becomes a target, it ceases to be a good measure.”

One familiar example to all of us is the rapid growth of clickbait headlines -- “You won’t believe what happens next” -- that provide no value but try to get people to click. This happened because headline writers were rewarded for getting a click, whether or not they got it through deception. When what the organization optimizes for is getting a click, teams will drive clicks.

Often companies pick poor success metrics such as clicks just because it is too hard to measure the things that matter most. Long-term metrics that try to be good proxies for what we really care about such as retention, long-term growth, long-term revenue, and customer satisfaction can be costly to measure. And, because of Goodhart’s Law, the metrics will not work forever and will need to be changed over time. Considerable effort is necessary.

Many leaders don’t realize the consequences of not putting in that effort. You will get what you measure. Unless you reward teams for the long-term growth and profitability of the company, teams will not optimize for the success of the company or shareholders.

What can companies do? Professor Thomas went on to say that companies should “use a slate of metrics to get a fuller picture and reduce gaming” which can “keep metrics in their place.” The intent is that gaming of one metric may be visible in another, so a slate with many metrics may show problems that otherwise might be missed. Another idea is changing metrics frequently, which also can reduce gaming and provides an opportunity to adjust metrics so they are closer to the true target.
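A minimal sketch of the slate idea follows, assuming illustrative metric names and a simple acceptance rule: a change that wins on clicks but regresses user reports or retention is rejected, because gaming one metric shows up in the others.

```python
# Minimal sketch of a metric "slate": evaluate a change on several metrics at
# once so that gaming one (clicks) is visible in another (reports, retention).
# Metric names and the acceptance rule are illustrative assumptions.
def evaluate_slate(metrics: dict[str, float], baselines: dict[str, float]) -> bool:
    deltas = {name: metrics[name] - baselines[name] for name in baselines}
    # Reject a change that wins on engagement but regresses trust or retention.
    if deltas.get("clicks", 0.0) > 0 and (
        deltas.get("user_reports", 0.0) > 0 or deltas.get("retention_28d", 0.0) < 0
    ):
        return False
    return all(delta >= 0 for delta in deltas.values())

print(evaluate_slate(
    metrics={"clicks": 1.08, "user_reports": 0.05, "retention_28d": 0.61},
    baselines={"clicks": 1.00, "user_reports": 0.03, "retention_28d": 0.64},
))  # False: clicks went up, but reports rose and retention fell
```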

Getting this wrong causes a lot of harm to the company and sometimes to others as well. “A modern AI case study can be drawn from recommendation systems,” Thomas writes. “Platforms are rife with attempts to game their algorithms, to show up higher in search results or recommended content, through fake clicks, fake reviews, fake followers, and more.”

“It is much easier to measure short-term quantities [such as] click-through rates,” Thomas said. But “many long-term trends have a complex mix of factors and are tougher to quantify.” There is a substantial risk if teams, executives, and companies get their metrics wrong. “Facebook has been the subject of years’ worth of ... scandals ... which is now having a longer-term negative impact on Facebook’s ability to recruit new engineers” and grow among younger users.

As Googler and AI expert François Chollet once said, “Over a short time scale, the problem of surfacing great content is an algorithmic problem (or a curation problem). But over a long time scale, it's an incentive engineering problem.”

It is the optimization of the algorithms, not the algorithms themselves, that determines what they show. Incentives, rewards, and metrics determine what wisdom of the crowd algorithms do. That is why metrics and incentives are so important.

Get the metrics wrong, and the long-term costs for the company — stalled growth, poor retention, poor reputation, regulatory risk — become worse and worse. Because the algorithms are optimized over time, it is important to be constantly fixing the data and metrics to make sure they are trustworthy and doing the right thing. Trustworthy data and long-term metrics lead to algorithms that minimize scams and maximize long-term growth and profits.

Wednesday, January 03, 2024

Book excerpt: From hope to despair and back to hope

(This is an excerpt from drafts of my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

Twenty-five years ago, when recommendation algorithms first launched at large scale on the internet, these algorithms helped people discover new books to read and new movies to watch.

In recent years, wisdom of the crowds failed the internet, and the internet filled with misinformation.

The story of why this happened — and how to fix it — runs through the algorithms that pick what we see on the internet. Algorithms use wisdom of the crowds at a massive scale to find what is popular and interesting. That is how they determine what to show to millions of people. When these algorithms fail, misinformation flourishes.

The reason the algorithms fail is not what you think. It is not the algorithms.

Only with an insider view can readers see how the algorithms work and how tech companies build these algorithms. The surprise is that the algorithms are actually made of people.

People build and maintain these algorithms. Wisdom of the crowds works using data about what people do. The key to why algorithms go wrong, and how they can be fixed, runs through people and the incentives people have.

When bad actors manipulate algorithms, they are trying to get their scams and misinformation seen by as many people as possible as cheaply as possible.

When teams inside companies optimize algorithms, they are trying to meet the goals executives set for them, whatever those goals are and regardless of whether they are the right goals for the company.

People’s incentives control what the algorithms do. And incentives are the key to fixing misinformation on the internet.

To make wisdom of the crowds useful again, and to make misinformation ineffective, all companies must use only reliable data and must not optimize their algorithms for engagement. As this book shows, these solutions reduce the reach of misinformation, making it far less effective and far more expensive for scammers and fraudsters.

We know these solutions work because some companies did it. Exposing a gold mine of knowledge buried deep inside the major tech companies, this book shows that some companies successfully stopped their algorithms from amplifying misinformation by not optimizing for engagement. And, importantly, those companies made more money by doing so.

Companies that have not fixed their algorithms have taken a dark path, blinded by short-term optimization for engagement, their teams misled by bad incentives and bad metrics inside their companies. This book shows the way out for those led astray.

People inside and outside the powerful tech companies, including consumers and policy makers, can help align incentives away from short-term engagement and toward long-term customer satisfaction and growth.

It turns out it's a win-win to listen to consumers and optimize algorithms to be helpful for your customers in the long-term. Nudging people's incentives in practical ways is easier once you see inside the companies, understand how they build these algorithms, and see that companies make more money when they do not myopically optimize their algorithms in ways that later will cause a flood of misinformation and scams.

Wisdom of the crowd algorithms are everywhere on the internet. Readers of this book may start out feeling powerless to fix the algorithms that control everything we see and the misinformation those algorithms promote. They should end it hopeful and ready to push for change.

Wednesday, December 27, 2023

Book excerpt: First pages of the book

(This is an excerpt from my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It". The first sentence and first page of a book hook readers in. This book starts with an entertaining tale about algorithms and their importance at the beginning of Amazon.com)

The old brick building for Amazon headquarters in 1997 was in a grimy part of downtown Seattle across from a needle exchange and Wigland.

There were only a couple dozen of us, the software engineers. We sat at desks made of unfinished four-by-fours bolted to what should have been a door. Exhausted from work, sometimes we slept on the floor of our tiny offices.

In my office, from the look of the carpet, somebody had spilled coffee many times. A soft blue glow from a screen showing computer code lit my face. I turned to find Jeff Bezos in my doorway on his hands and knees.

He touched his forehead down to the filthy floor. Down and up again, his face disappeared and reappeared as he bowed.

He was chanting: “I am not worthy. I am not worthy.”

What could cause the founder of Amazon, soon to be one of the world’s richest men, to bow down in gratitude to a 24-year-old computer programmer? An algorithm.

Algorithms are computer code, often created in the wee hours by some geek in a dingy room reeking of stale coffee. Algorithms can be enormously helpful. And they can be equally harmful. Either way they choose what billions of people see online every day. Algorithms are power.

What do algorithms do? Imagine you are looking for something good to read. You remember your friend recently read the book Good Omens and liked it. You go to Amazon and search for [good omens]. What happens next?

Your casually dashed-off query immediately puts thousands of computers to work just to serve you. Search and ranker algorithms run their computer code in milliseconds, then the computers talk to each other about what they found for you. Working together, the computers comb through what are billions of potential options, filtering and sorting among them, to surface what you might be seeking.

And look at that! The book Good Omens is the second thing Amazon shows you, just below the recent TV series. That TV series looks fun too. Perhaps you’ll watch that later. For now, you click on the book.

As you look at the Good Omens book, more algorithms spring into action looking for more ways to help you. Perhaps there are similar books you might enjoy? Recommender algorithms follow their instructions, sorting through what millions of other customers found to show you what “customers who liked Good Omens also liked.” Maybe there is something that might be enticing, that gets you to click “buy now”.
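At their core, such “customers who liked this also liked” suggestions can be as simple as counting which items co-occur in customers’ histories, as in the minimal sketch below. The data is made up, and this is an illustration of the idea, not Amazon’s actual algorithm.

```python
# Minimal sketch of "customers who liked this also liked" style suggestions:
# count how often other items co-occur with a given item across customers'
# histories. The purchase data is illustrative, not Amazon's actual algorithm.
from collections import Counter

def also_liked(histories: list[set[str]], item: str, top_n: int = 3) -> list[tuple[str, int]]:
    co_counts: Counter = Counter()
    for history in histories:
        if item in history:
            co_counts.update(other for other in history if other != item)
    return co_counts.most_common(top_n)

histories = [
    {"Good Omens", "Small Gods", "The Hitchhiker's Guide to the Galaxy"},
    {"Good Omens", "Small Gods"},
    {"Good Omens", "American Gods"},
    {"Dune", "Foundation"},
]
print(also_liked(histories, "Good Omens"))
```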

And that’s why Jeff Bezos was on my office floor, laughing and bowing.

The percentage of Amazon sales that come through recommender algorithms is much higher than what you’d expect. In fact, it’s astounding. For years, about a third of Amazon’s sales came directly through content suggested by Amazon’s recommender algorithms.

Most of the rest of Amazon’s revenue comes through Amazon’s search and ranking algorithms. In total, nearly all of Amazon’s revenue comes from content suggested, surfaced, found, and recommended by algorithms. At a brick and mortar bookstore, a clerk might help you find the book you are looking for. At “Earth’s Biggest Bookstore”, algorithms find or suggest nearly everything people buy.

How algorithms are optimized, and what they show to people, is worth billions every year. Even small changes can make enormous differences.

Jeff Bezos celebrated that day because what algorithms show to people matters. How algorithms are tuned, improved, and optimized matters. It can change a company’s fortunes.

One of Amazon’s software engineers just found an improvement that made the recommender algorithms much more effective. So there Jeff was, bobbing up and down. Laughing. Celebrating. All because of how important recommender algorithms were to Amazon and its customers.

Tuesday, December 19, 2023

Book excerpt: Wisdom of the trustworthy

(This is an excerpt from drafts of my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

Wisdom of the crowds is the idea that combining the opinions of a lot of people will often get a very useful result, usually one that is better and more accurate than almost all of the opinions on their own.

Wisdom of the trustworthy is the same idea, just with the addition of only using the opinions of people that are provably real, independent people.

Discard all opinions from known shills, spammers, and propagandists. Then also discard all the opinions from accounts that may be shills, spammers, and propagandists, even if you do not know. Only use proven, trustworthy behavior data.

Wisdom of the trustworthy is a necessary reaction to online anonymity. Accounts are not people. In fact, many people have multiple accounts, often for reasons that have nothing to do with trying to manipulate algorithms. But the ease of creating accounts, and difficulty of verifying that accounts are actual people, means that we should be skeptical of all accounts.

On an internet where anyone can create and control hundreds or even thousands of accounts, trust should be hard to gain and easy to lose. New accounts should not be considered reliable. Over time, if an account behaves independently, interacts with other accounts in a normal manner, does not shill or otherwise coordinate with others, and doesn’t show robot behaviors such as liking or sharing posts at rates not possible for humans, it may start to be trusted.

The moment an account engages in anything that resembles coordinated shilling, all trust should be lost, and the account should go back to untrusted. Trust should be hard to gain and easy to lose.
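A minimal sketch of that policy, with illustrative signals and thresholds: trust accrues slowly through ordinary, independent behavior and is wiped out the moment an account joins coordinated shilling.

```python
# Minimal sketch of "trust is hard to gain and easy to lose" for an account.
# The signals, rates, and thresholds are illustrative assumptions.
class AccountTrust:
    TRUSTED_THRESHOLD = 0.8

    def __init__(self) -> None:
        self.score = 0.0  # new accounts start untrusted

    def observe_normal_day(self) -> None:
        # Trust accrues slowly through ordinary, independent, human-paced behavior.
        self.score = min(1.0, self.score + 0.01)

    def observe_coordinated_shilling(self) -> None:
        # Any sign of coordination or bot-like behavior drops the account back to untrusted.
        self.score = 0.0

    @property
    def trusted(self) -> bool:
        return self.score >= self.TRUSTED_THRESHOLD

account = AccountTrust()
for _ in range(90):          # roughly three months of normal behavior
    account.observe_normal_day()
print(account.trusted)       # True: trust was earned slowly
account.observe_coordinated_shilling()
print(account.trusted)       # False: trust is lost immediately
```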

Wisdom of the trustworthy makes manipulation much more costly, time-consuming, and inefficient for spammers, scammers, propagandists, and other adversaries. No longer would creating a bunch of accounts that are used by wisdom of the crowd algorithms be easy or cheap. Now, it would be a slow, cumbersome process, trying to get the accounts trusted, then having the accounts ignored again the moment they shilled anything.

Over a decade ago, a paper called “Wisdom of the Few” showed that recommender systems can do as well using only a much smaller number of carefully selected experts as they would using all available data on every user. The insight was that high quality data often outperforms large amounts of noisy data, especially when the noisy data is not merely noisy but actively manipulated by adversaries. Less is more, the researchers argued, if less means only using provably good data for recommendations.

Big data has become a mantra in computer science. The more data, the better, it is thought, spurred on by an early result by Michele Banko and Eric Brill at Microsoft Research that showed that accuracy on a natural language task depended much more on how much training data was used than on which algorithm they used. Results just kept getting better and better the more data they used. As others found similar things in search, recommender systems, and other machine learning tasks, big data became popular.

But big data cannot mean corrupted data. Data that has random noise is usually not a problem; averaging over large amounts of the data usually eliminates the issue. But data that has been purposely skewed by an adversary with a different agenda is very much a problem.

In search, web spam has been a problem since the beginning, including widespread manipulation of the PageRank algorithm invented by the Google founders. PageRank worked by counting the links between web pages as votes. People created links between pages on the Web, and each of these links could be viewed as a vote that some person thought that page was interesting. By recursively looking at who linked to whom, and who linked to those, the idea was that wisdom could be found in the crowd of people who created those links.

It wasn’t long until people started creating lots of pages and lots of links pointing at whatever page they wanted amplified by the search engines. This was the beginning of link farms, massive collections of pages that effectively voted that a target page was super interesting and should be shown high up in the search results.

The solution to web spam was TrustRank. TrustRank starts with a small number of trusted websites, then extends trust to the sites they link to. Untrusted or unknown websites are largely ignored when ranking. Only the votes from trusted websites count in determining what search results to show to people. A related idea, Anti-TrustRank, starts with all the known spammers, shills, and other bad actors, and marks them and everyone they associate with as untrusted.
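And here is a minimal sketch of the Anti-TrustRank idea as described here, with an illustrative graph: distrust spreads outward from known bad actors to whatever they link to or associate with.

```python
# Minimal sketch of the Anti-TrustRank idea as described above: start from known
# bad actors and propagate distrust to whatever they link to or associate with.
# The graph and number of hops are illustrative assumptions.
def anti_trust(graph: dict[str, list[str]], known_bad: set[str], hops: int = 2) -> set[str]:
    distrusted = set(known_bad)
    frontier = set(known_bad)
    for _ in range(hops):
        # Anything a distrusted node links to or endorses becomes suspect too.
        frontier = {target for node in frontier for target in graph.get(node, [])}
        distrusted |= frontier
    return distrusted

graph = {
    "spam-farm.example": ["shill-blog.example"],
    "shill-blog.example": ["promoted-page.example"],
    "library.example": ["good-news.example"],
}
print(anti_trust(graph, known_bad={"spam-farm.example"}))
# The spam farm, the shill blog, and the page they promote are all distrusted.
```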

E-mail spam had a similar solution. Nowadays, trusted senders, which include major companies and people you have interacted with in the past, can reach your inbox. Unknown senders are viewed skeptically, sometimes allowed into your inbox and sometimes not, depending on what they have done in the past and what others seem to think of the e-mails they send, but their messages often go straight to spam. And you never see e-mail from untrusted senders at all; it is blocked or sent straight to spam with no chance of being featured in your inbox.

The problem on social media is severe. Professor Fil Menczer described how “Social media users have in past years become victims of manipulation by astroturf causes, trolling and misinformation. Abuse is facilitated by social bots and coordinated networks that create the appearance of human crowds.” The core problem is bad actors creating a fake crowd. They are pretending to be many people and effectively stuffing the ballot box of algorithmic popularity.

For wisdom of the crowd algorithms such as rankers and recommenders, only use proven trustworthy behavior data. Big data is useless if the data is manipulated. It should not be possible to use fake and controlled accounts to start propaganda trending and picked up by rankers and recommenders.

To avoid manipulation, any behavior data that may involve coordination should be discarded and not used by the algorithms. Discard all unknown or known bad data. Keep only known good data. Shilling will kill the credibility and usefulness of a recommender.

It should be wisdom of independent reliable people. Do not try to find wisdom in big corrupted crowds full of shills.

There is no free speech issue with only considering authentic data when computing algorithmic amplification. CNN analyst and Yale lecturer Asha Rangappa described it well: “Social media platforms like Twitter are nothing like a real public square ... in the real public square ... all of the participants can only represent one individual – themselves. They can’t duplicate themselves, or create a fake audience for the speaker to make the speech seem more popular than it really is.” However, on Twitter, “the prevalence of bots, combined with the amplification features on the platform, can artificially inflate the ‘value’ of an idea ... Unfortunately, this means that mis- and disinformation occupy the largest ‘market share’ on the platform.”

Professor David Kirsch echoed this concern that those who create fake crowds are able to manipulate people and algorithms on social media. Referring to the power of fake crowds to amplify, Kirsch said, “It matters who stands in the public square and has a big megaphone they’re holding, [it’s] the juice they’re able to amplify their statements with.”

Unfortunately, on the internet, amplified disinformation is particularly effective. For example, bad actors can use a few dozen very active controlled accounts to create the appearance of unified opinion in a discussion forum, shouting down anyone who disagrees and controlling the conversation. Combined with well-timed likes and shares from multiple controlled accounts, they can overwhelm organic activity.

Spammers can make their own irrelevant content trend. These bad actors create manufactured popularity and consensus with all their fake accounts.

Rangappa suggested a solution based on a similar idea to prohibiting collusion in the free market: “In the securities market, for example, we prohibit insider trading and some forms of coordinated activity because we believe that the true value of a company can only be reflected if its investors are competing on a relatively level playing field. Similarly, to approximate a real marketplace of ideas, Twitter has to ensure that ideas can compete fairly, and that their popularity represents their true value.”

In the Facebook Papers published in The Atlantic, Adrienne LaFrance talked about the problem at Facebook, saying the company “knows that repeat offenders are disproportionately responsible for spreading misinformation ... It could tweak its algorithm to prevent widespread distribution of harmful content ... It could also automatically throttle groups when they’re growing too fast, and cap the rate of virality for content that’s spreading too quickly ... Facebook could shift the burden of proof toward people and communities to demonstrate that they’re good actors — and treat reach as a privilege, not a right ... It could do all of these things.”

Former Facebook data scientist Jeff Allen similarly proposed, “Facebook could define anonymous and unoriginal content as ‘low quality’, build a system to evaluate content quality, and incorporate those quality scores into their final ranking ‘value model’.” Allen went on to add in ideas similar to TrustRank, saying that accounts that produce high quality content should be trusted, accounts that spam and shill should be untrusted, and then trust could be part of ranking.

Allen was concerned about the current state of Facebook, and warned of the long-term retention and growth problems Facebook and Twitter later experienced: “The top performing content is dominated by spammy, anonymous, and unoriginal content ... The platform is easily exploited. And while the platform is vulnerable, we should expect exploitative actors to be heavily [exploiting] it.”

Bad actors running rampant is not inevitable. As EFF and Stack Overflow board member Anil Dash said, fake accounts and shilling are “endemic to networks that are thoughtless about amplification and incentives. Intentionally designed platforms have these issues, but at a manageable scale.”

Just as web spam and email spam were reduced to almost nothing by carefully considering how to make them less effective, and just as many community sites like Stack Overflow and Medium are able to counter spam and hate, Facebook and other social media websites can too.

When algorithms are manipulated, everyone but the spammers loses. Users lose because the quality of the content is worse, with shilled scams and misinformation appearing above content that is actually popular and interesting. The business loses because its users are less satisfied, eventually causing retention problems and hurting long-term revenue.

The idea of only using trustworthy accounts in wisdom of the crowd algorithms has already been proven to work. Similar ideas are widely used already for reducing web and email spam to nuisance levels. Wisdom of the trustworthy should be used wherever and whenever there are problems with manipulation of wisdom of the crowd algorithms.

Trust should not be easy to get. New accounts are easy for bad actors to create, so they should be viewed with skepticism. Unknown or untrusted accounts should have their content downranked, and their actions should be mostly ignored by ranker and recommender algorithms. If social media companies did this, then shilling, spamming, and propaganda by bad actors would pay off far less often, making many of these efforts too costly to continue.

In the short-term, with the wrong metrics, it looks great to allow bots, fake accounts, fake crowds, and shilling. Engagement numbers go up, and you see many new accounts. But it’s not real. These aren’t real people who use your product, helpfully interact with other people, and buy things from your advertisers. Allowing untrustworthy accounts and fake crowds hurts customers, advertisers, and the business in the long-term.

Only trustworthy accounts should be amplified by algorithms. And trust should be hard to get and easy to lose.

Wednesday, December 13, 2023

Extended book excerpt: Computational propaganda

(This is a long excerpt about manipulation of algorithms by adversaries from my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

Inauthentic activity is designed to manipulate social media. It exists because there is a strong incentive to manipulate wisdom of the crowd algorithms. If someone can get recommended by algorithms, they can get a lot of free attention because they now will be the first thing many people see.

For adversaries, a successful manipulation is like a free advertisement, seen by thousands or even millions. On Facebook, Twitter, YouTube, Amazon, Google, and most other sites on the internet, adversaries have a very strong incentive to manipulate these companies’ algorithms.

For some governments, political parties, and organizations, the incentive to manipulate goes beyond merely shilling some content for the equivalent of free advertising. These adversaries engage thousands of controlled accounts over long periods of time in disinformation campaigns.

The goal is to promote a point of view, shut down those promoting other points of view, obfuscate unfavorable news and facts, and sometimes even create whole other realities that millions of people believe are true.

These efforts by major adversaries are known as “computational propaganda.” Computational propaganda unites many terms — “information operations,” “information warfare,” “influence operations,” “online astroturfing,” “cybertufing,” “disinformation campaigns,” and many others — and is defined as “the use of automation and algorithms in the manipulation of public opinion.”

More simply, computational propaganda is an attempt to give “the illusion of popularity” by using a lot of fake accounts and fake followers to make something look far more popular than it actually is. It creates “manufactured consensus,” the appearance that many people think something is interesting, true, and important when, in fact, it is not.

It is propaganda by stuffing the ballot box. The trending algorithm on Twitter and the recommendation engine on Facebook look at what people are sharing, liking, and commenting on as votes, votes for what is interesting and important. But “fringe groups that were five or 10 people could make it look like they were 10 or 20,000 people,” reported PBS’ The Facebook Dilemma. “A lot of people sort of laughed about how easy it was for them to manipulate social media.” They run many “accounts on Facebook at any given time and use them to manipulate people.”

This is bad enough when it is done for profit, to amplify a scam or just to try to sell more of some product. But when governments get involved, especially autocratic governments, reality itself can start to warp under sustained efforts to confuse what is real. “It’s anti-information,” said historian Heather Cox Richardson. Democracies rely on a common understanding of facts, of what is true, to function. If you can get even a few people to believe something that is not true, it changes how people vote, and can even “alter democracy.”

The scale of computational propaganda is what makes it so dangerous. Large organizations and state-sponsored actors are able to sustain thousands of controlled accounts pounding out the same message over long periods of time. They can watch how many real people react to what they do, learn what is working and what is failing to gain traction, and then adapt, increasing the most successful propaganda.

The scale is what creates computational propaganda from misinformation and disinformation. Stanford Internet Observatory’s Renée DiResta provided an excellent explanation in The Yale Review: “Misinformation and disinformation are both, at their core, misleading or inaccurate information; what separates them is intent. Misinformation is the inadvertent sharing of false information; the sharer didn’t intend to mislead people and genuinely believed the story. Disinformation, by contrast, is the deliberate creation and sharing of information known to be false. It’s a malign narrative that is spread deliberately, with the explicit aim of causing confusion or leading the recipient to believe a lie. Computational propaganda is a suite of tools or tactics used in modern disinformation campaigns that take place online. These include automated social media accounts that spread the message and the algorithmic gaming of social media platforms to disseminate it. These tools facilitate the disinformation campaign’s ultimate goal — media manipulation that pushes the false information into mass awareness.”

The goal of computational propaganda is to bend reality, to make millions believe something that is not true is true. DiResta warned: “As Lenin purportedly put it, ‘A lie told often enough becomes the truth.’ In the era of computational propaganda, we can update that aphorism: ‘If you make it trend, you make it true.’”

In recent years, Russia was particularly effective at computational propaganda. Adversaries created fake media organizations that looked real, created fake accounts with profiles and personas that looked real, and developed groups and communities to the point they had hundreds of thousands of followers. Russia was “building influence over a period of years and using it to manipulate and exploit existing political and societal divisions,” DiResta wrote in the New York Times.

The scale of this effort was remarkable. “About 400,000 bots [were] engaged in the political discussion about the [US] Presidential election, responsible for roughly 3.8 million tweets, about one-fifth of the entire conversation,” said USC researchers.

Only later was the damage at all understood. In the book Zucked, Roger McNamee summarized the findings: “Facebook disclosed that 126 million users had been exposed to Russian interference, as well as 20 million users on Instagram ... The user number represents more than one-third of the US population, but that grossly understates its impact. The Russians did not reach a random set of 126 million people on Facebook. Their efforts were highly targeted. On the one hand, they had targeted people likely to vote for Trump with motivating messages. On the other, they identified subpopulations of likely Democratic voters who might be discouraged from voting ... In an election where only 137 million people voted, a campaign that targeted 126 million eligible voters almost certainly had an impact.”

These efforts were highly targeted, trying to pick out parts of the US electorate that might be susceptible to their propaganda. The adversaries worked over a long period of time, adapting as they discovered what was getting traction.

By late 2019, as reported by MIT Technology Review, “all 15 of the top pages targeting Christian Americans, 10 of the top 15 Facebook pages targeting Black Americans, and four of the top 12 Facebook pages targeting Native Americans were being run by ... Eastern European troll farms.”

These pages “reached 140 million US users monthly.” They achieved this extraordinary reach not by people seeking them out on their own, but by manipulating Facebook’s “engagement-hungry algorithm.” These groups were so large and so popular because “Facebook’s content recommendation system had pushed [them] into their news feeds.” Facebook’s optimization process for their algorithms was giving these inauthentic actors massive reach for their propaganda.

As Facebook data scientists warned inside of the company, “Instead of users choosing to receive content from these actors, [Facebook] is choosing to give them an enormous reach.” Real news, trustworthy information from reliable sources, took a back seat to this content. Facebook was amplifying these troll farms. The computational propaganda worked.

The computational propaganda was not limited to Facebook. The efforts spanned many platforms, trying the same tricks everywhere, looking for flaws to exploit and ways to extend their reach. The New York Times reported that the Russian “Internet Research Agency spread its messages not only via Facebook, Instagram and Twitter ... but also on YouTube, Reddit, Tumblr, Pinterest, Vine and Google+” and others. Wherever they were most successful, they would do more. They went wherever it was easiest and most efficient to spread their false message to a mass audience.

It is tempting to question how so many people could fall for this manipulation. How could over a hundred million Americans, and hundreds of millions of people around the world, see propaganda and believe it?

But this propaganda did not obviously look like Russian propaganda. The adversaries would impersonate Americans using fake accounts with descriptions that appeared to be authentic on casual inspection. Most people would have no idea they were reading a post or joining a Facebook Group that was created by a troll farm.

Instead “they would be attracted to an idea — whether it was guns or immigration or whatever — and once in the Group, they would be exposed to a steady flow of posts designed to provoke outrage or fear,” said Roger McNamee in Zucked. “For those who engaged frequently with the Group, the effect would be to make beliefs more rigid and more extreme. The Group would create a filter bubble, where the troll, the bots, and the other members would coalesce around an idea floated by the troll.”

The propaganda was carefully constructed, using amusing memes and emotion-laden posts to lure people in, then using manufactured consensus through multiple controlled accounts to direct and control what people saw afterwards.

Directing and controlling discussions requires only a small number of accounts if their actions are well timed and coordinated. Most people reading a group are passive: they are not actively posting, and far more people read than like, comment, or reshare.

Especially if adversaries time the first few comments and likes well, then “as few as 1 to 2 percent of a group can steer the conversation if they are well-coordinated. That means a human troll with a small army of digital bots—software robots—can control a large, emotionally engaged Group.” If any real people start to argue or point out that something is not true, they can be drowned out by the controlled accounts simultaneously slamming them in the comments, creating an illusion of consensus and keeping the filter bubble intact.

This spanned the internet, on every platform and across seemingly legitimate websites. Adversaries tried many things to see what worked. When something gained traction, they would “post the story simultaneously on an army of Twitter accounts” along with their controlled accounts saying, “read the story that the mainstream media doesn’t want you to know about.” If any real journalist eventually wrote about the story, “The army of Twitter accounts—which includes a huge number of bots—tweets and retweets the legitimate story, amplifying the signal dramatically. Once a story is trending, other news outlets are almost certain to pick it up.”

In the most successful cases, what starts as propaganda becomes misinformation, with actual American citizens unwittingly echoing Russian propaganda, now mistakenly believing a constructed reality was actually real.

By no means was this limited to the United States or to Russian actors. Many large-scale adversaries, including governments, political campaigns, multinational corporations, and organizations, are engaging in computational propaganda. What they have in common is using thousands of fake, hacked, controlled, or paid accounts to rapidly create messages on social media and the internet. They manufacture consensus around their message and sow confusion about what is real and what is not. They have been seen “distorting political discourse, including in Albania, Mexico, Argentina, Italy, the Philippines, Afghanistan, South Korea, Bolivia, Ecuador, Iraq, Tunisia, Turkey, Taiwan, Paraguay, El Salvador, India, the Dominican Republic, Indonesia, Ukraine, Poland and Mongolia,” wrote the Guardian.

Computational propaganda is everywhere in the world. It “has become a regular tool of statecraft,” said Princeton Professor Jacob Shapiro, “with at least 51 different countries targeted by government-led online influence efforts” in the last decade.

An example in India is instructive. In the 2019 general election in India, adversaries used “hundreds of WhatsApp groups,” fake accounts, hacked and hijacked accounts, and “Tek Fog, a highly sophisticated app” to centrally control activity on social media. In a published paper, researchers wrote that adversaries “were highly effective at producing lasting Twitter trends with a relatively small number of participants.” This computational propaganda amplified “right-wing propaganda … making extremist narratives and political campaigns appear more popular than they actually are.” They were remarkably effective: “A group of public and private actors working together to subvert public discourse in the world’s largest democracy by driving inauthentic trends and hijacking conversations across almost all major social media platforms.”

Another recent example was in Canada, the so-called “Siege of Ottawa.” In the Guardian, Arwa Mahdawi wrote about how it came about: “It’s an astroturfed movement – one that creates an impression of widespread grassroots support where little exists – funded by a global network of highly organised far-right groups and amplified by Facebook ... Thanks to the wonders of modern technology, fringe groups can have an outsize influence ... [using] troll farms: organised groups that weaponise social media to spread misinformation.”

Computational propaganda “threatens democracies worldwide.” It has been “weaponized around the world,” said MIT Professor Sinan Aral in the book The Hype Machine. In the 2018 general elections in Sweden, a third of politics-related hashtagged tweets “were from fake news sources.” In the 2018 national elections in Brazil, “56 percent of the fifty most widely shared images on [popular WhatsApp] chat groups were misleading, and only 8 percent were fully truthful.” In the 2019 elections in India, “64 percent of Indians encountered fake news online.” In the Philippines, there was a massive propaganda effort against Maria Ressa, a journalist “working to expose corruption and a Time Person of the Year in 2018.” Every democracy around the world is seeing adversaries using computational propaganda.

The scale is what makes computational propaganda so concerning. The actors behind computational propaganda are often well-funded with considerable resources to bring to bear to achieve their aims.

Remarkably, there is now enough money involved that there are private companies “offering disinformation-for-hire services.” Computational propaganda “has become more professionalised and is now produced on an industrial scale.” It is everywhere in the world. “In 61 countries, we found evidence of political parties or politicians running for office who have used the tools and techniques of computational propaganda,” said researchers at the University of Oxford. The way they work is always the same. “Automated accounts are often used to amplify certain narratives while drowning out others ... in order to game the automated systems social media companies use.” It is spreading propaganda using manufactured consensus at industrial scale.

Also concerning is that computational propaganda can target just the most vulnerable and most susceptible people and still achieve its aims. In a democracy, the difference between winning and losing an election is often just a few percentage points.

To change the results of an election, you don’t have to influence everyone. The target of computational propaganda is usually “only 10-20% of the population.” Swaying even a fraction of this audience by convincing them to vote in a particular way or discouraging them from voting at all “can have a resounding impact,” shifting all the close elections favorably, and leading to control of a closely-contested government.

To address the worldwide problem of computational propaganda, it is important to understand why it works. Part of why computational propaganda works is the story of why propaganda has worked throughout history. Computational propaganda floods what people see with a particular message, creating an illusion of consensus while repeating the same false message over and over again.

This feeds the common belief fallacy, even if the number of controlled accounts is relatively small, by creating the appearance that everyone believes this false message to be true. It creates a firehose of falsehood, flooding people with the false message, creating confusion about what is true or not, and drowning out all other messages. And the constant repetition, seeing the message over and over, fools our minds through the illusory truth effect, which tends to make us believe things we have seen many times before, “even if the idea isn’t plausible and even if [we] know better.”

As Wharton Professor Ethan Mollick wrote, “The Illusionary Truth Effect supercharges propaganda on social media. If you see something repeated enough times, it seems more true.” Professor Mollick went on to say that studies found it works on the vast majority of people even when the information isn’t plausible and merely five repetitions were enough to start to make false statements seem true.

The other part of why computational propaganda works is algorithmic amplification by social media algorithms. Wisdom of the crowd algorithms, which are used in search, trending, and recommendations, work by counting votes. They look for what is popular, or what seems to be interesting to people like you, by looking at what people seemed to have enjoyed in the recent past.

When the algorithms look for what people are enjoying, they assume each account belongs to a real person and that each person is acting independently. When adversaries create many fake accounts or coordinate many controlled accounts, they are effectively voting many times, fooling the algorithms with an illusion of consensus.

What the algorithm thought was popular and interesting turns out to be shilled. The social media post is not really popular or interesting, but the computational propaganda effort made it look that way to the algorithm. And so the algorithm amplifies the propaganda, inappropriately showing it to many more people and making the problem far worse.
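A toy example shows the mechanics. A naive popularity counter treats every like as an independent vote, so a handful of operators controlling many accounts can outvote a genuinely popular post; the accounts and numbers below are made up purely to illustrate the effect.

```python
# Toy illustration of manufactured consensus: a counter that assumes one
# account equals one independent person is easily outvoted by a small
# group controlling many fake accounts.

from collections import Counter

def naive_popularity(votes):
    """Count every (account, item) like as one independent vote for the item."""
    return Counter(item for _account, item in votes)

# A genuinely popular post: 500 distinct real accounts each like it once.
organic = [(f"real_user_{i}", "genuine-post") for i in range(500)]

# A propaganda post: 10 operators, each controlling 200 fake accounts.
shilled = [(f"operator_{op}_sock_{i}", "propaganda-post")
           for op in range(10) for i in range(200)]

print(naive_popularity(organic + shilled).most_common())
# [('propaganda-post', 2000), ('genuine-post', 500)]
# The algorithm "sees" the propaganda as four times more popular, even
# though only ten actual people are behind it.
```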

Both people using social media and the algorithms picking what people see on social media are falling victim to the same technique, manufactured consensus, the propagandist creating “illusory notions of ... popularity because of this same automated inflation of the numbers.” It is adversaries using bots and coordinated accounts to mimic real users.

“They can drive up the number of likes, re-messages, or comments associated with a person or idea,” wrote the authors of Social Media and Democracy. “Researchers have catalogued political bot use in massively bolstering the social media metrics.”

The fact that they are only mimicking real users is important to addressing the problem. They are not real users, and they don’t behave like real users.

For example, when the QAnon conspiracy theory was growing rapidly on Facebook, it grew using “minimally connected bulk group invites. One member sent over 377,000 group invites in less than 5 months.” There were very few people responsible. According to reporter David Gilbert, there are “a relatively few number of actors creating a large percentage of the content.” He said a “small group of users has been able to hijack the platform.”

To shill and coordinate between many accounts pushing propaganda, adversaries have to behave in ways that are not human. Bots and other accounts that are controlled by just a few people all “pounce on fake news in the first few seconds after it’s published, and they retweet it broadly.” The initial spreaders of the propaganda “are much more likely to be bots than humans” and often will be the same accounts, superspreaders of propaganda, acting over and over again.

Former Facebook data scientist Sophie Zhang talked about this in a Facebook internal memo, reported by BuzzFeed: “thousands of inauthentic assets ... coordinated manipulation ... network[s] of more than a thousand actors working to influence ... The truth was, we simply didn’t care enough to stop them.” Despairing about the impact of computational propaganda on people around the world, Zhang went on to lament, “I have blood on my hands.”

Why do countries, and especially authoritarian regimes, create and promote propaganda? Why do they bother?

The authors of the book Spin Dictators write that, in recent years, because of globalization, post-industrial development, and technology changes, authoritarian regimes have “become less bellicose and more focused on subtle manipulation. They seek to influence global opinion, while co-opting and corrupting Western elites.”

Much of this is simply that, in recent decades, it has become cheaper and more effective to maintain power through manipulation and propaganda: the cost of communication has fallen, making disinformation campaigns on social media inexpensive, while the economic benefits of openness have raised the costs of using violence.

“Rather than intimidating citizens into submission, they use deception to win the people over.” Nowadays, propaganda is easier and cheaper. “Their first line of defense, when the truth is against them, is to distort it. They manipulate information ... When the facts are good, they take credit for them; when bad, they have the media obscure them when possible and provide excuses when not. Poor performance is the fault of external conditions or enemies ... When this works, spin dictators are loved rather than feared.”

Nowadays, it is cheaper to become loved than feared. “Spin dictators manipulate information to boost their popularity with the general public and use that popularity to consolidate political control, all while pretending to be democratic.”

While not all manipulation of wisdom of the crowd algorithms comes from state actors, adversarial states are a big problem: “The Internet allows for low-cost, selective censorship that filters information flows to different groups.” Propaganda online is cheap. “Social networks can be hijacked to disseminate sophisticated propaganda, with pitches tailored to specific audiences and the source concealed to increase credibility. Spin dictators can mobilize trolls and hackers ... a sophisticated and constantly evolving tool kit of online tactics.”

Unfortunately, internet “companies are vulnerable to losing lucrative markets,” so they are not always quick to act when they discover countries manipulating their rankers and recommender algorithms; authoritarian governments often play to this fear by threatening retaliation or loss of future business in the country.

Because “the algorithms that decide what goes viral” are vulnerable to shilling, it is also easy for spin dictators to “use propaganda to spread cynicism and division.” And “if Western publics doubt democracy and distrust their leaders, those leaders will be less apt to launch democratic crusades around the globe.” Moreover, they can spread the message that “U.S.-style democracy leads to polarization and conflict” and corruption. This reduces the threats to an authoritarian leader and reinforces their own popularity.

Because this manipulation consists of adversaries trying to increase their own visibility, downranking or removing accounts involved in computational propaganda carries little business risk. New accounts and any account involved in shilling, coordination, or propaganda could largely be ignored for the purpose of algorithmic amplification, and repeat offenders could be banned entirely.

Computational propaganda exists because it is cost effective to do at large scale. Increasing the cost of propaganda reaching millions of people may be enough to vastly reduce its impact. As Sinan Aral writes in the book The Hype Machine, “We need to cut off the financial returns to spreading misinformation and reduce the economic incentive to create it in the first place.”

While human susceptibility to propaganda is difficult to solve, on the internet today, a big part of the problem of computational propaganda comes down to how easy it is for adversaries to manipulate wisdom of the crowd algorithms and have their propaganda cheaply and efficiently amplified.

Writing in the Washington Post, Will Oremus blamed recommendation and other algorithms for making it far too easy for the bad guys. “The problem of misinformation on social media has less to do with what gets said by users than what gets amplified — that is, shown widely to others — by platforms’ recommendation software,” he said. Raising the cost of manipulating the recommendation engine is key to reducing the effectiveness of computational propaganda.

Wisdom of the crowds depends on the crowd consisting of independent voices voting independently. When that assumption is violated, adversaries can force the algorithms to recommend whatever they want. Computational propaganda uses a combination of bots and many controlled accounts, along with so-called “useful idiot” shills, to efficiently and effectively manipulate trending, ranker, and recommender algorithms.

When companies allow their platforms to be manipulated by computational propaganda, the experience on the internet gets worse. University of Oxford researchers found that “globally, disinformation is the single most important fear of internet and social media use and more than half (53%) of regular internet users are concerned about disinformation [and] almost three quarters (71%) of internet users are worried about a mixture of threats, including online disinformation, fraud and harassment.” At least in the long-term, it is in everyone’s interest to reduce computational propaganda.

When adversaries have their bots and coordinated accounts like, share, and post, none of that is authentic activity. None of it shows that people actually like the content. None of that content is actually popular or interesting. It is all manipulation of the algorithms and only serves to make relevance and the experience worse.

Sunday, December 10, 2023

Book excerpt: The rise and fall of wisdom of the crowds

(This is an excerpt from drafts of my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

Wisdom of the crowds is the epiphany that combining many people's opinions is useful, often more useful than expert opinions.

Customer reviews summarize what thousands of people think of movies, books, and everything else you might want to buy. Customer reviews can be really useful for knowing if you want to buy something you've never tried before.

When you search the internet, what thousands of people clicked on before you helps determine what you see. Most of the websites on the internet are useless or scams; wisdom of the crowd filters all that out and helps you find what you need.

When you read the news online, you see news first that other people think is interesting. What people click determines what information you see about what's going on in the world.

Algorithms on the internet take the wisdom of the crowds to a gargantuan scale. Algorithms process all the data, summarizing it all down, until you get millions of people helping millions of people find what they need.

It sounds great, right? And it is. But once you use wisdom of the crowds, scammers come in. They see dollar signs in fooling those algorithms. Scammers profit from faking crowds.

When manipulated, wisdom of the crowds can promote scams, misinformation, and propaganda. Spammers clog up search engines until we can't see anything but scams. Online retailers are filled with bogus positive customer reviews of counterfeit and fraudulent items. The bad guys astroturf everything using fake crowds. Foreign operatives are able to flood the zone on social media with propaganda using thousands of fake accounts.

What we need is an internet that works for us. We need an internet that is useful and helpful, where we can find what we need without distractions and scams. Wisdom of the crowds and the algorithms that use wisdom of the crowds are the key to getting us there. But wisdom of the crowds can fail.

It's tricky to get right. Good intentions can lead to destructive outcomes. When executives tell their teams to optimize for clicks, they discover far too late that going down that path optimizes for scams and hate. When teams use big data, they're trying to make their algorithms work better, but they often end up sweeping up manipulated data that skews their results toward crap. Understanding why wisdom of the crowds fails and how to fix it is the key to getting us the internet we want.

The internet has come a long way. In the mid-1990s, it was just a few computer geeks. Nowadays, everyone in the world is online. There have been hard lessons learned along the way. These are the stories of unintended consequences.

Well-intentioned efforts to tell teams to increase engagement caused misinformation and spam. Experimentation and A/B testing helped some teams help customers, but also accidentally sent some teams down dark paths of harming customers. Attempts to improve algorithms can easily go terribly wrong.

The internet has grown massively. During all of that growth, many internet companies struggled with figuring out how to make a real business. At first, Google had no revenue and no idea how to make money off web search. At first, Amazon had no profits and it was unclear if it ever would.

Almost always, people at tech companies had good intentions. We were scrambling to build the right thing. What we ended up building was not always the right thing. The surprising reason for this failure is that what gets built depends not so much on the technology as on the incentives people have.

Friday, December 08, 2023

Book excerpt: Manipulating likes, comments, shares, and follows

(This is an excerpt from drafts of my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

“The systems are phenomenally easy to game,” explained Stanford Internet Observatory’s Renee DiResta.

The fundamental idea behind the algorithms used by social media is that “popular content, as defined by the crowd” should rise to the top. But “the crowd doesn’t have to be real people.”

In fact, adversaries can get these algorithms to feature whatever content they want. The process is easy and cheap, just pretend to be many people: “Bots and sockpuppets can be used to manipulate conversations, or to create the illusion of a mass groundswell of grassroots activity, with minimal effort.”

Whatever they want — whether it is propaganda, scams, or just flooding-the-zone with disparate and conflicting misinformation — can appear to be popular, which trending, ranker, and recommender algorithms will then dutifully amplify.

“The content need not be true or accurate,” DiResta notes. All this requires is a well-motivated small group of individuals pretending to be many people. “Disinformation-campaign material is spread via mass coordinated action, supplemented by bot networks and sockpuppets (fake people).”

Bad actors can amplify propaganda on a massive scale, reaching millions, cheaply and easily, from anywhere in the world. “Anyone who can gather enough momentum from sharing, likes, retweets, and other message-amplification features can spread a message across the platforms’ large standing audiences for free,” DiResta continued in an article for The Yale Review titled “Computational Propaganda”: “Leveraging automated accounts or fake personas to spread a message and start it trending creates the illusion that large numbers of people feel a certain way about a topic. This is sometimes called ‘manufactured consensus’.”

Another name for it is astroturf. Astroturf is feigning popularity by using a fake crowd of shills. It's not authentic. Astroturf creates the illusion of popularity.

There are even businesses set up to provide the necessary shilling, hordes of fake people on social media available on demand to like, share, and promote whatever you may want. As described by Sarah Frier in the book No Filter: “If you searched [get Instagram followers] on Google, dozens of small faceless firms offered to make fame and riches more accessible, for a fee. For a few hundred dollars, you could buy thousands of followers, and even dictate exactly what these accounts were supposed to say in your comments.”

Sarah Frier described the process in more detail. “The spammers ... got shrewder, working to make their robots look more human, and in some cases paying networks of actual humans to like and comment for clients.” They found “dozens of firms” offering these services of “following and commenting” to make content falsely appear to be popular and thereby get free amplification by the platforms’ wisdom of the crowd algorithms. “It was quite easy to make more seemingly real people.”

In addition to creating fake people by the thousands, it is easy to find real people who are willing to be paid to shill, some of whom would even “hand over the password credentials” for their account, allowing the propagandists to use their account to shill whenever they wished. For example, there were sites where bad actors could “purchase followers and increase engagement, like Kicksta, Instazood, and AiGrow. Many are still running today.” And in discussion groups, it was easy to recruit people who, for some compensation, “would quickly like and comment on the content.”

Bad actors manipulate likes, comments, shares, and follows because it works. When wisdom of the crowd algorithms look for what is popular, they pick up all these manipulated likes and shares, thinking they are real people acting independently. When the algorithms feature manipulated content, bad actors get what is effectively free advertising, the coveted top spots on the page, seen by millions of real people. This visibility, this amplification, can be used for many purposes, including foreign state-sponsored propaganda and scams trying to swindle people.

Professor Fil Menczer studies misinformation and disinformation on social media. In our interview, he pointed out that it is not just wisdom of the crowd algorithms that fixate on popularity, but a “cognitive/social” vulnerability that “we tend to pay attention to items that appear popular … because we use the attention of other people as a signal of importance.”

Menczer explained: “It’s an instinct that has evolved for good reason: if we see everyone running we should run as well, even if we do not know why.” Generally, it does often work to look at what other people are doing. “We believe the crowd is wise, because we intrinsically assume the individuals in the crowd act independently, so that the probability of everyone being wrong is very low.”

But this is subject to manipulation, especially online on social media “because one entity can create the appearance of many people paying attention to some item by having inauthentic/coordinated accounts share that item.” That is, if a few people can pretend to be many people, they can create the appearance of a popular trend, and fool our instinct to follow the crowd.

To make matters worse, there often can be a vicious cycle where some people are manipulated by bad actors, and then their attention, their likes and shares, is “further amplified by algorithms.” Often, it is enough to merely start some shilled content trending, because “news feed ranking algorithms use popularity/engagement signals to determine what is interesting/engaging and then promote this content by ranking it higher on people’s feeds.”
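Here is a minimal simulation of that vicious cycle, under assumptions I made up for illustration: an item seeded with shilled engagement wins the top slot, the top slot gets far more views, and the early manipulation compounds even though real users like both items at the same rate.

```python
# Toy feedback loop: ranking by engagement feeds engagement. An item
# given a head start by fake accounts keeps the top slot even though
# real users like both items at the same underlying rate (5% here).
# All numbers are assumptions chosen only to show the feedback.

def simulate(rounds: int = 10, seed_fake_likes: float = 300.0) -> dict:
    engagement = {"shilled-item": seed_fake_likes, "organic-item": 0.0}
    for _ in range(rounds):
        ranked = sorted(engagement, key=engagement.get, reverse=True)
        impressions = {ranked[0]: 1000, ranked[1]: 100}  # top slot gets 10x the views
        for item, views in impressions.items():
            engagement[item] += views * 0.05             # same like rate for both items
    return engagement

print(simulate())
# {'shilled-item': 800.0, 'organic-item': 50.0}
# Real users behaved identically toward both items; the 300 fake likes
# alone decided which one captured the top slot and compounded its lead.
```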

Adversaries manipulating the algorithms can be clever and patient, sometimes building up their controlled accounts over a long period of time. One low cost method of making a fake account look real and useful is to steal viral content and share it as your own.

In an article titled “Those Cute Cats Online? They Help Spread Misinformation,” New York Times reporters described one method of how new accounts manage to quickly gain large numbers of followers. The technique involves reposting popular content, such as memes that previously went viral, or cute pictures of animals: “Sometimes, following a feed of cute animals on Facebook unknowingly signs [people] up” for misinformation. “Engagement bait helped misinformation actors generate clicks on their pages, which can make them more prominent in users’ feeds in the future.”

Controlling many seemingly real accounts, especially accounts that have real people following them to see memes and cute pictures of animals, allows bad actors to “act in a coordinated fashion to increase influence.” The goal, according to researchers at Indiana University, is to create a network of controlled shills, many of which might be unwitting human participants, that are “highly coordinated, persistent, homogeneous, and fully focused on amplifying” scams and propaganda.

This is not costless for social media companies. Not only are people directly misled, and even sometimes pulled into conspiracy theories and scams, but amplifying manipulated content, including propaganda, rather than genuinely popular content will “negatively affect the online experience of ordinary social media users” and “lower the overall quality of information” on the website. Degradation of the quality of the experience can be hard for companies to see, only eventually showing up in poor retention and user growth when customers get fed up and leave in disgust.

Allowing fake accounts, manipulation of likes and shares, and shilling of scams and propaganda may hurt the business in the long-term, but, in the short-term, it can mean advertising revenue. As Karen Hao reported in MIT Technology Review, “Facebook isn’t just amplifying misinformation. The company is also funding it.” While some adversaries manipulate wisdom of the crowd algorithms in order to push propaganda, some bad actors are in it for the money.

Social media companies allowing this type of manipulation does generate revenue, but it also reduces the quality of the experience, filling the site with unoriginal content, republished memes, and scams. Hao detailed how it works: “Financially motivated spammers are agnostic about the content they publish. They go wherever the clicks and money are, letting Facebook’s news feed algorithm dictate which topics they’ll cover next ... On an average day, a financially motivated clickbait site might be populated with ... predominantly plagiarized ... celebrity news, cute animals, or highly emotional stories—all reliable drivers of traffic. Then, when political turmoil strikes, they drift toward hyperpartisan news, misinformation, and outrage bait because it gets more engagement ... For clickbait farms, getting into the monetization programs is the first step, but how much they cash in depends on how far Facebook’s content-recommendation systems boost their articles.”

The problem is that this works. Adversaries have a strong incentive to manipulate social media’s algorithms if it is easy and profitable.

But “they would not thrive, nor would they plagiarize such damaging content, if their shady tactics didn’t do so well on the platform,” Hao wrote. “One possible way Facebook could do this: by using what’s known as a graph-based authority measure to rank content. This would amplify higher-quality pages like news and media and diminish lower-quality pages like clickbait, reversing the current trend.” The idea is simple, that authoritative, trustworthy sources should be amplified more than untrustworthy or spammy sources.
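A graph-based authority measure is in the same family as PageRank and the TrustRank idea mentioned earlier. The sketch below propagates trust from a small set of vetted seed pages across a made-up link graph, so a ring of clickbait pages that only reference each other ends up with almost no authority; the graph, seeds, and parameters are illustrative assumptions, not any platform's real system.

```python
# Toy TrustRank-style authority score: trust flows out from a small set
# of vetted seed pages along the link graph, so pages that only reference
# each other (a clickbait ring) accumulate almost none of it.
# The link graph and seed set below are made up for illustration.

def trust_scores(links, trusted_seeds, damping=0.85, iterations=50):
    pages = list(links)
    seed_mass = 1.0 / len(trusted_seeds)
    score = {p: (seed_mass if p in trusted_seeds else 0.0) for p in pages}
    for _ in range(iterations):
        new = {p: ((1.0 - damping) * seed_mass if p in trusted_seeds else 0.0)
               for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                new[target] += damping * score[page] / len(outlinks)
        score = new
    return score

links = {
    "news-org-a": ["news-org-b", "research-site"],
    "news-org-b": ["news-org-a", "research-site"],
    "research-site": ["news-org-a"],
    "clickbait-farm-1": ["clickbait-farm-2"],  # the ring only references itself
    "clickbait-farm-2": ["clickbait-farm-1"],
}

scores = trust_scores(links, trusted_seeds={"news-org-a", "research-site"})
for page, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {s:.3f}")
# The interlinked news and research pages end up with nearly all of the
# trust, while the clickbait ring, which no trusted page links to, gets
# essentially none; that score can then feed into ranking and reach.
```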

Broadly this type of manipulation is spam, much like spam that technology companies have dealt with for years in email and on the Web. If social media spam was not cost-effective, it would not exist. Like with web spam and email spam, the key with social media spam is to make it less effective and less efficient. As Hao suggested, manipulating wisdom of the crowd algorithms could be made to be less profitable by viewing likes and shares from less trustworthy accounts with considerable skepticism. If the algorithms did not amplify this content as much, it would be much less lucrative to spammers.

Inside of Facebook, data scientists proposed something similar. Billy Perrigo at Time magazine reported that Facebook “employees had discovered that pages that spread unoriginal content, like stolen memes that they’d seen go viral elsewhere, contributed to just 19% of page-related views on the platform but 64% of misinformation views.” Facebook data scientists “proposed downranking these pages in News Feed ... The plan to downrank these pages had few visible downsides ... [and] could prevent all kinds of high-profile missteps.”
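One hypothetical way to approximate "unoriginal content" is to fingerprint each post's text and check whether it has already been seen elsewhere, then cut the reach of pages that mostly repost. Real systems would need far more robust near-duplicate detection (images, paraphrases, and so on); the code below is only a sketch with invented thresholds.

```python
# Illustrative sketch of flagging "unoriginal content": fingerprint each
# post's normalized text and check whether it was already seen elsewhere,
# then cut the reach of pages that mostly repost.

import hashlib
import re

def fingerprint(text: str) -> str:
    normalized = re.sub(r"\W+", " ", text.lower()).strip()
    return hashlib.sha256(normalized.encode()).hexdigest()

def page_originality(posts, seen_fingerprints) -> float:
    """Fraction of a page's posts not already seen elsewhere on the platform."""
    if not posts:
        return 1.0
    original = sum(1 for p in posts if fingerprint(p) not in seen_fingerprints)
    return original / len(posts)

def reach_multiplier(originality: float, threshold: float = 0.5) -> float:
    # Pages that mostly repost others' viral content get their reach cut.
    return 1.0 if originality >= threshold else 0.2

already_viral = {fingerprint("Look at this adorable puppy!!")}
meme_farm_posts = ["look at this ADORABLE puppy", "Look at this adorable puppy!!"]
originality = page_originality(meme_farm_posts, already_viral)
print(originality, reach_multiplier(originality))  # 0.0 0.2
```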

What the algorithms show is important. The algorithms can amplify a wide range of interesting and useful content that enhances discovery and keeps people on the platform.

Or the algorithms can amplify manipulated content, including hate speech, spam, scams, and misinformation. That might make people click now in outrage, or perhaps fool them for a while, but it will eventually cause people to leave in disgust.