Automated detection of harmful content

LinkedIn is committed to maintaining a safe, trusted, and professional environment for our members. Sending a message that violates the LinkedIn User Agreement or Professional Community Policies, including but not limited to harassment, financial fraud, or phishing, is prohibited on LinkedIn.

To help detect and prevent the sharing of harmful content in messages, LinkedIn provides members with an optional advanced safety feature. When enabled, this feature allows LinkedIn’s automated machine learning models to detect likely harmful content within messages. While we always work to protect members by proactively identifying malware and viruses, these advanced models protect against a wider range of policy violations, including but not limited to sexual harassment in the form of text, images, and video, as well as other activities such as attempts to move conversations off LinkedIn.

What happens if harmful content is detected?
If the automated systems detect likely harmful content in a message from a sender with whom you have no prior messaging history, the message is sent directly to your spam messages folder.
If the automated systems detect likely harmful content in a message from a sender with whom you do have prior messaging history, the message is hidden behind a warning. You can dismiss the warning to view the message and, if desired, report it.
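For readers who find a concrete restatement helpful, the short Python sketch below captures the routing rule described above. It is purely illustrative and not LinkedIn's actual implementation; the looks_harmful and has_prior_conversation inputs are hypothetical stand-ins for the outputs of LinkedIn's internal automated systems.

```python
def route_incoming_message(looks_harmful: bool, has_prior_conversation: bool) -> str:
    """Return where an incoming message is surfaced for the recipient.

    Illustrative only: the two inputs stand in for LinkedIn's internal
    harmful-content detection and messaging-history checks.
    """
    if not looks_harmful:
        return "inbox"               # Normal delivery.
    if has_prior_conversation:
        return "inbox_with_warning"  # Hidden behind a dismissible warning.
    return "spam_folder"             # New sender: routed to spam messages.


# A likely harmful message from a first-time sender goes to the spam folder.
assert route_incoming_message(looks_harmful=True, has_prior_conversation=False) == "spam_folder"
# A likely harmful message in an existing conversation is hidden behind a warning.
assert route_incoming_message(looks_harmful=True, has_prior_conversation=True) == "inbox_with_warning"
```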
If you report another member's message, they won't be notified of who reported them, and the conversation you reported will no longer appear in your messaging inbox. LinkedIn may review the reported conversation and take additional measures, such as warning or suspending the author, if the content violates our User Agreement or Professional Community Policies. In some cases, you'll receive more information about the outcome via email.
How can I control when LinkedIn applies its automated systems to my incoming messages to detect harmful content?
You can manage the setting directly from the Data privacy page in your settings, and you can turn it on or off at any time. Note that turning the setting off limits LinkedIn's ability to keep abusive messages out of your inbox.
To turn Automated Detection of Harmful Content on or off:
  1. Click the Me icon at the top of your LinkedIn homepage.
  2. Select Settings & Privacy from the dropdown.
  3. Click Data privacy on the left side of the page.
  4. Click Automated detection of harmful content under the Messaging Experience section.
  5. Use the toggle to turn this feature on or off.