In this Insights@Questrom Q&A, Barbara Bickart, Senior Associate Dean of Graduate Programs and Associate Professor of Marketing, explains how influencers shape information and ideas on social media. Her insights reveal how persuasive tactics can contribute to drastic events such as the January 2021 riot at the U.S. Capitol during the presidential transition to Joe Biden.
Question 1: How do influencers, whether that be brands, politicians, or celebrities, guide decision-making on social media?
Effective influencers connect to their audiences and build relationships, which enhances trust. Once trust is established, the influencer’s behavior and recommendations are likely to shape the audience’s decisions. Establishing trust is not easy, but there are several approaches. First, we tend to like people who are similar to us and are more persuaded by what they say and do. Influencers can therefore emphasize their similarity to their audience, disclosing details of their lives, beliefs, and opinions to which the audience can relate.
Second, some influencers, particularly celebrities, can capitalize on their attractiveness to persuade. For example, Reese Witherspoon is likable both because she is attractive and because (through her Instagram posts) she seems to live a normal life, much like her audience.
Finally, influencers can increase trust by demonstrating expertise or credibility on a topic. Effective influencers are often both likable and perceived as experts. When we like an influencer and perceive them to be knowledgeable on the topic, we are more likely to pay attention to their messages and accept their ideas at face value.
Question 2: How did social media influence the riots and violence that broke out at the Capitol by people protesting the election results?
In our day-to-day interactions, finding people who share our beliefs, opinions, and behaviors on political issues can be challenging. Our in-person networks are limited by who we work with and where we live. Social media allows us to find people with common interests, regardless of geographic location. For example, social media has facilitated connections among far-right extremists across the globe. Before the current social media platforms were established, we might have spread ideas in email chains or in phone calls, but those communications could only go to people we knew—e.g., those in our current networks. Social media, then, expands our networks, allowing us to find and share ideas with people who hold similar beliefs who we might not otherwise have met.
In addition, in the case of the Capitol riots, the dialogue on social media (catalyzed by the rhetoric used on both sides of the political divide) was emotional and arousing, which leads to increased sharing. Moreover, because messages containing false information tend to be more novel and to inspire anxiety-provoking emotions, fake news is more likely to be shared. In other words, social media helped people find others with similar beliefs, and it fueled the emotional reactions that drove people both to share these messages (including false information) and to participate in the riots.
Question 3: How can politicians and tech companies prevent violence from escalating on social media in the future?
To prevent violence from escalating on social media, we need to do a better job of controlling the spread of messages that are clearly false or misleading. Tech companies could apply more stringent criteria in identifying and blocking messages that contain false information, or the people who spread these messages. For example, in Fall 2020, to limit the spread of fake news, Twitter stopped showing (or “deindexed”) tweets that included the hashtag #BidenCrimeFamily. However, to circumvent this action, the operatives behind the message (including President Trump) adapted the hashtag to #BidenCrimeFamiily, deliberately misspelling the word “family”. One approach for Twitter and other platforms would be to cast a broader net to catch these intentional misspellings and hashtag variations and so slow the spread of fake news.
There is also a risk that blocking politicians and other influencers who spread false information may simply drive them to newer, less restrictive platforms that are harder to monitor. This tradeoff is a real and difficult one, both for tech companies and for society.
To reduce polarization and promote open-mindedness, politicians could tone down the rhetoric that inflames their bases, and this is true on both sides of the aisle. As noted above, people are more likely to spread emotional messages, particularly those that inspire arousing emotions such as anxiety, fear, or disgust. Of course, this kind of rhetoric can benefit politicians by mobilizing their base and encouraging action, for better or worse.