Yoel Roth highlighted the company’s efforts to enhance user safety on dating platforms

Yoel Roth, the Vice President of Trust and Safety at Match Group, has highlighted the company’s efforts to enhance user safety on its dating platforms, such as Tinder and Hinge, by leveraging artificial intelligence (AI). Match Group is using AI to detect “off-color” or potentially inappropriate messages, particularly from male users, and to encourage behavioral change. This initiative is part of a broader strategy to foster respectful interactions and safer experiences on its apps.

Key AI-driven features include:

  • “Are You Sure?” Prompt: This feature warns users before they send messages flagged as potentially offensive. According to Roth, about 20% of users who see this prompt decide not to send the flagged message, demonstrating its effectiveness in curbing inappropriate behavior.
  • “Does This Bother You?” Feature: If a flagged message is sent despite the warning, the recipient is prompted to report or unmatch the sender if they find the message bothersome. This adds another layer of protection for users.
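The two prompts above form a simple decision flow: flag a message before sending, give the sender a chance to withdraw it, and if it goes out anyway, give the recipient an easy path to report. The sketch below illustrates that flow; none of these names come from Match Group's actual systems, and the keyword check is a stand-in for the real ML classifier.

```python
# Hypothetical sketch of the two-prompt moderation flow.
# FLAGGED_TERMS and all function names are illustrative assumptions,
# not Match Group's implementation.

FLAGGED_TERMS = {"offensive_word"}  # placeholder for a trained classifier


def is_flagged(message: str) -> bool:
    """Stand-in for the model that scores a message as potentially offensive."""
    return any(term in message.lower() for term in FLAGGED_TERMS)


def send_message(message: str, sender_confirms: bool, recipient_bothered: bool) -> str:
    """Walk a single message through both prompts; return the outcome."""
    if not is_flagged(message):
        return "delivered"
    # "Are You Sure?" — per the article, about 20% of senders withdraw here.
    if not sender_confirms:
        return "withdrawn_by_sender"
    # Sent despite the warning: recipient sees "Does This Bother You?"
    if recipient_bothered:
        return "delivered_with_report_option"
    return "delivered"
```

Keeping the flow as pure functions like this makes each branch easy to measure, which is presumably how a figure like the 20% withdrawal rate gets tracked.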

Match Group’s AI tools analyze patterns in reported messages and use machine learning to identify harmful language. The system continues to improve as it processes more data, enabling it to better distinguish between harmless and harmful communication. These efforts aim not only to detect and mitigate inappropriate behavior but also to “nudge” users toward more respectful interactions.
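The feedback loop described here, where reported messages become training signal, can be sketched as a minimal bag-of-words scorer. This is an assumption-laden toy, not Match Group's system: real moderation models are far more sophisticated, but the core idea of re-weighting language based on user reports looks roughly like this.

```python
# Minimal sketch of a flagger that improves as recipients report messages.
# Class and method names are hypothetical; a production system would use a
# proper ML model, not raw word-count ratios.

from collections import Counter


class ReportTrainedFlagger:
    """Flags messages whose words appear often in previously reported ones."""

    def __init__(self, threshold: float = 1.0):
        self.reported = Counter()   # word counts from reported messages
        self.harmless = Counter()   # word counts from unreported messages
        self.threshold = threshold

    def learn(self, message: str, was_reported: bool) -> None:
        """Fold one message and its report outcome into the counts."""
        counts = self.reported if was_reported else self.harmless
        counts.update(message.lower().split())

    def score(self, message: str) -> float:
        """Higher score = more similar to previously reported messages."""
        words = message.lower().split()
        reported_hits = sum(self.reported[w] for w in words)
        harmless_hits = sum(self.harmless[w] for w in words)
        return reported_hits / (harmless_hits + 1)

    def is_flagged(self, message: str) -> bool:
        return self.score(message) > self.threshold
```

As more messages and reports flow through `learn`, the scorer gets better at separating harmless from harmful phrasing, which mirrors the article's point that the system improves as it processes more data.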

In addition to addressing inappropriate messages, Match Group employs AI to tackle other safety concerns, such as identifying fake profiles and scams. For instance, the company claims to remove over 44 fake accounts every minute using AI technology. These measures reflect Match Group’s commitment to creating safer online dating environments while balancing automation with human oversight for nuanced decision-making.