At first, it sounded like one of those too-perfect internet theories: Change one setting. Watch the algorithm fall at your feet.

Then women started doing it.

According to The Guardian, dozens of women joined a collective LinkedIn experiment after a series of viral posts suggested that changing their profile gender to “male” boosted visibility on the network. Some women reported spikes in reach and engagement of 400% or more. Others went further, rewriting their content in what they called “bro-coded” language: action-oriented, buzzword-heavy. Less warmth, more swagger.

LinkedIn denies that gender factors into ranking. Still, the anecdotes keep coming, and they raise a deeper question about power on professional platforms: If visibility can be altered by toggling identity, what does that say about the system that decides who gets seen?

How LinkedIn’s Gender Bias Went From Rumor to Group Experiment

Jane Frankland MBE framed the moment as a sudden collective realization. In her LinkedIn post titled “LinkedIn’s Gender Bias Is Worse Than We Think — And the One Fix No One Is Talking About,” she wrote: “Have you heard? There is a ghost in the machine, and right now the conversation is no longer a whisper. It’s a chorus of women and some media are belting out: LinkedIn’s algorithm appears to favour men.”

Frankland also described what women began documenting in screenshots. “Women have been sharing screenshots showing that when they switch their profile gender to ‘male,’ their reach doesn’t just improve, it skyrockets by 400%, 700%, even 1,300%+,” she wrote.

The mechanics vary from person to person, but the suspicion stays consistent: women believe the platform rewards men more aggressively, whether through gender markers, language patterns, or the kind of professional “authority” LinkedIn tends to amplify.

The “Bro Boost” and the 400% Spike People Keep Talking About

One of the most cited examples came from Megan Cornish, a communications strategist for mental health tech companies.

According to The Guardian, Cornish started experimenting after her reach “decline[d] precipitously” earlier this year. First, she changed her LinkedIn gender setting to “male.” Then she asked ChatGPT to rewrite her profile in “male-coded” language, based on the idea that the platform favors “agentic” words such as “strategic” and “leader.” Finally, she rewrote older posts in similarly “agentic” language to compare performance.

According to the outlet, Cornish’s reach spiked, “increasing 415% in the week after she trialled the changes.” Cornish later posted about it, and her post went viral, earning nearly 5,000 reactions.

But the most revealing part of the story was not the number. It was her reaction to becoming “bro Megan.”

“The problem was, she hated it,” The Guardian reported. Cornish said her posts used to be “soft”: “Concise and clever, but also like warm and human.” Then she saw the new version of herself, “like a white male swaggering around.”

She stopped after a week. “I was going to do it for a full month. But every day I did it, and things got better and better, I got madder and madder.”

LinkedIn’s Gender Bias, According to LinkedIn

LinkedIn’s public position stays firm: gender is not a ranking signal.

According to Inc.com, a LinkedIn spokesperson said: “Our algorithms do not use gender as a ranking signal, and changing gender on your profile does not affect how your content appears in search or feed. We regularly evaluate our systems across millions of posts, including checks for gender related disparities, alongside ongoing reviews and member feedback.”

LinkedIn acknowledged the trend but said it did not consider “demographic information” when deciding who gets attention. Instead, the company said “hundreds of signals” influence how a post performs.

Yet this is where the tension sits: LinkedIn can deny using gender as an input, and users can still experience uneven visibility. Platform outcomes can reflect cultural bias even when the algorithm avoids explicit demographic data, especially if it rewards engagement patterns that were shaped by bias in the first place.
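To make that feedback loop concrete, here is a minimal, hypothetical sketch in plain Python. It has nothing to do with LinkedIn’s actual systems or data; every name and number in it is invented. It shows a ranker trained only on past engagement, with gender never supplied as a feature, that still scores one group lower because a style feature correlated with gender acts as a proxy.

```python
# Toy illustration only (not LinkedIn's system): a ranker trained purely on
# historical engagement can still disadvantage a group if that engagement
# was biased, because a style feature acts as a proxy for gender.
import random

random.seed(42)

def make_post(author_gender):
    # Hypothetical "agentic style" score: in this toy data, style correlates
    # with gender, even though gender itself is never used as a feature.
    agentic = random.gauss(0.7 if author_gender == "m" else 0.3, 0.15)
    return {"agentic": max(0.0, min(1.0, agentic)), "gender": author_gender}

def biased_engagement(post):
    # Simulated audience behavior: historically, more "agentic" posts get
    # more clicks. This is the cultural bias baked into the training labels.
    return random.random() < 0.2 + 0.6 * post["agentic"]

# 1. Generate historical posts and biased engagement labels.
history = [make_post(random.choice("mf")) for _ in range(10_000)]
labels = [biased_engagement(p) for p in history]

# 2. "Train" a one-feature ranker: estimated click rate per style bucket.
#    Note that the model never sees a gender field.
buckets = {}
for post, clicked in zip(history, labels):
    b = round(post["agentic"], 1)
    hits, total = buckets.get(b, (0, 0))
    buckets[b] = (hits + clicked, total + 1)

def predicted_reach(post):
    hits, total = buckets.get(round(post["agentic"], 1), (0, 1))
    return hits / total

# 3. Score fresh posts and compare average predicted reach by author gender.
fresh = [make_post(random.choice("mf")) for _ in range(2_000)]
by_gender = {"m": [], "f": []}
for post in fresh:
    by_gender[post["gender"]].append(predicted_reach(post))

for g, scores in by_gender.items():
    print(g, round(sum(scores) / len(scores), 3))
# Typical output: posts by "m" authors score noticeably higher, even though
# the ranker was never given a gender field.
```

The point of the toy is narrow: removing the gender field does not remove the bias if the engagement data that trains the system already carries it.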

The Confounding Variables That Make This Harder to Prove

In her article for Inc.com, Suzanne Lucas points out that Cornish changed multiple variables at once. She changed her gender marker, yes. But she also ran posts through ChatGPT and asked the model to make her writing more “agentic.”

Lucas wrote that when someone asks for an “agentic style” rewrite, they generally want language that communicates “initiative, ownership, autonomy, confidence, and forward motion,” describing it as “the opposite of passive, hesitant, or reactive writing.” Lucas also notes ChatGPT’s response that this style is “generally considered a masculine style of writing.”

The article also raises other possibilities. Maybe the shift came from writing style rather than gender settings. Maybe posts built with ChatGPT align more easily with LinkedIn’s AI-driven content classification. Lucas quotes Tim Jura, LinkedIn’s VP of Engineering, who said: “At LinkedIn, we use AI, and more recently LLMs, to help identify the context of a post and determine whether the content is a helpful insight, a job opportunity, or a career milestone.”

Lucas also mentions the Hawthorne effect, which suggests that people behave differently when they know they’re being studied. She adds the simplest possibility, too: sometimes engagement spikes for no apparent reason. Sometimes posts flop. “It’s impossible to make a judgment based on this experiment,” she writes, “no matter how compelling the 400 percent result is.”

When Race Enters the Experiment, the Story Changes

The gender experiment does not land the same way for everyone.

According to The Guardian, Cass Cooper, a writer on technology and social media algorithms, changed her gender to “male,” then changed her race to “white.” Cooper is Black. The result, she said, was a decline in reach and engagement. The Guardian reported that other women of color discussed similar experiences.

Cooper’s takeaway did not center on LinkedIn alone. “We know there’s algorithmic bias, but it’s really hard to know how it works in a particular case or why,” she said. While she found the experiments “frustrating,” she also said they reflected society. “I’m not frustrated with the platform. I’m more frustrated with the lack of progress [in society].”

That point matters because it reframes the question. Even if the algorithm avoids demographic signals, bias still emerges through what people engage with, who gets treated as credible, and what “professionalism” looks like in practice.

LinkedIn’s Gender Bias and the Question Jane Frankland Keeps Asking

Frankland’s argument does not hinge on proving intent inside LinkedIn’s code. She focuses on design and incentives.

“Why does a B2B networking platform need to know its users’ gender at all?” she wrote.

She argues the field itself creates risk. “The very existence of this data field is a relic,” she wrote. “It enforces a binary that excludes non-binary and gender diverse individuals, and for women, it acts as a tag that the algorithm seems to use to throttle reach.”

Frankland also points to past examples of algorithmic discrimination. “Amazon’s recruitment AI system spent a decade teaching itself that men were the superior candidates,” she wrote. She also cited “a US study of 133 AI systems” that found “nearly half (44%) displayed gender bias” and “a quarter (25%) amplified both gender and racial discrimination.”

Her bottom line is blunt: “This is what algorithms do when you feed them gender,” she wrote. “They learn the wrong lesson.”

The Three Options Women Are Debating Right Now

Frankland outlines three strategies women have discussed. She presents them as competing paths, each with tradeoffs.

First, she describes “The Rebellion” as women staying “loudly and proudly visible,” crediting leaders such as Nishma Patel Robb, who argue that showing up with excellent content and supporting one another can retrain the system. Frankland writes: “It is change by influence, not disruption.”

Second, she outlines “The Trojan Horse Option,” in which women mass-switch their gender to “Man” to corrupt the dataset and break old patterns. She calls it disruptive but psychologically costly. “Women and non men should not have to misgender themselves to get fair reach,” she wrote.

Third, she describes “The Invisibility Cloak and Data Blackout,” where users select “Prefer not to say.” Frankland argues this only works if men join too. If women alone hide their gender, she warns that the algorithm will simply learn “Prefer Not to Say = Woman or non-men.”

She frames it as a data problem and a power problem. “This is not a symbolic gesture,” she wrote. “It’s data disruption.”

Another Viral LinkedIn Identity Experiment Already Exposed Hiring Bias

This conversation also fits into a wider genre of identity testing on professional platforms.

According to People of Color in Tech, Aliyah Jones went viral after going undercover on LinkedIn as a white woman named Emily to expose racial bias in corporate hiring. Jones wrote on Kickstarter: “I made that fake white LinkedIn profile out of frustration but also out of grief. Because no matter how qualified I was, how articulate, how buttoned up… being Black still meant being overlooked.”

The outlet reported that Jones documented the eight-month experiment across TikTok and YouTube, and that it drew widespread response. “When I shared my story online, it went viral, but it also cracked something open,” she said. According to People of Color in Tech, she created submission forms, and “within a week, more than 300 people signed up.”

She later drew a boundary around repeating the experiment. “It was a one-time, lived experiment,” she explained on LinkedIn, according to the outlet. “I would much rather not repeat trauma for clicks, views, or notoriety that was never the case.”

Now, Jones is turning the project into a feature-length documentary, Being Black in Corporate America, which the outlet describes as exploring “the emotional toll, constant code switching, and everyday resilience” of Black professionals.

What This Reveals About Power on Professional Platforms

The easiest version of this story would be a simple conclusion: women toggled “male,” and the algorithm rewarded them. Case closed.

However, there isn’t a clean ending, and the women living through it do not describe it that way either.

Some women saw huge spikes. Others did not. Variables got tangled, making causality hard to pin down. However, the most urgent problem might be the gender box itself, because systems absorb bias faster when they can label people.

Still, the collective experiment landed because it spoke to something familiar: professional credibility still reads as masculine in many spaces. The algorithm may not “use gender,” but it can still learn from what people reward, click, and amplify. Even a platform built around merit can reproduce the old hierarchy if “merit” gets measured through biased engagement.

Frankland calls for a hard reset. “Until LinkedIn removes the field entirely,” she wrote, “the most powerful thing we can do as human beings, men, women, and anyone in between genders is to refuse to feed the machine.”

And in late 2025, that may be the most unsettling part. The system does not need to say it favors men for women to feel the consequences.

It only needs to keep delivering the same outcome.