How AI is playing cupid and breaking hearts

Virtual partners pose privacy pitfalls for unsuspecting users

As dating app usage goes up during the month of love, experts warn against falling for high-tech scammers.
Online scams are getting more sophisticated, especially with the introduction of mass-produced flirt bots. (123RF/Igor Sapozhkov)

Valentine’s Day is around the corner, and not everyone is looking for a human companion. Instead, some are embracing machines over living people. Designed with an affirming tone in mind, AI companion chatbots can blur emotional boundaries and foster romantic attachment. While this may seem harmless, it raises concerns both for individuals and organisations seeking to prevent emotional dependency, manipulation and data leakage.

“Unlike general-purpose chatbots, AI companion apps like Replika and Character.AI go a step further by offering custom characters — from friends and romantic partners to fantasy personas — designed to feel distinctly human,” says Anna Collard, SVP of content strategy & CISO advisor at KnowBe4 Africa.

Growth in the AI companion app sector has been rapid: 60-million new downloads were recorded in the first half of 2025 alone, an 88% year-on-year rise. The market now includes 337 revenue-generating apps worldwide, with more than a third launched last year.

Collard says that many users are duped into feeling they are safe sharing intimate conversations with a machine — the so-called ELIZA effect.


“AI companions can create the perception of a perfect psychological environment for a user to lower their guard. They are non-judgmental, always available, and programmed to be supportive. For an employee dealing with high stress or isolation, the bot becomes a trusted confidant. It’s just part of our psychology to anthropomorphise machines or bots, particularly when they appear human-like.”

This psychological bond creates a security vulnerability. When users perceive an AI bot as a “friend” or “partner”, they are far more likely to share sensitive information — ranging from personal grievances and health concerns to proprietary corporate data.

An immediate threat to organisations is the leakage of sensitive information. Because these bots are often developed by smaller, niche startups with questionable data protection standards, the information shared with a bot is rarely private. A case in point is the recent example of an AI toy that exposed 50,000 logs of its chats with children, leaving the youngsters’ private conversations viewable by anyone with a Gmail account.

The privacy policies of these apps are often opaque. In some cases, chat logs are used to further train the models or are stored in insecure databases. “Caution is definitely required,” Collard comments. “What feels like a private, low-stakes interaction could contain sensitive information, strategy, financial pressures, personal stressors or contextual details that adversaries could weaponise.”

Could we see hackers targeting lonely individuals with mass-produced flirt bots? Collard believes it’s already happening.

While using chatbots might seem harmless, scammers can use them to their advantage. (123RF/lightfieldstudios)

“Social engineering has always been scaled by exploiting emotion, urgency, fear, curiosity, love and attraction,” she comments. “AI simply automates that at scale. What worries me most isn’t the technology itself, but how it empowers those who have a malicious intent to convincingly mirror human intimacy, for example, systematic romance scammers.”

According to Collard, in a matter of years, scams have evolved from the “Dear Sir/Madam” type to emotionally intelligent manipulation. “And it’s not the bots themselves that are the issue; it’s the intentional use of them by scammers,” she says.

She cites the example of LoveGPT, a malicious bot that helps scammers say psychologically triggering things to create dependency and stir up emotions in their victims. “All the scammers need to do is copy and paste or even just automate the conversations,” she states.

“Ultimately, no chatbot, no matter how attentive or emotionally fluent, can replace genuine human connection,” she emphasises.

If an interaction with a chatbot begins to feel emotionally substitutive, secretive or hard to step away from, she believes that’s a signal to pause and reach out to a trusted person or professional. “Technology may be part of modern life, but that means we need to strengthen our digital mindfulness skills and learn how to recognise manipulation or induced dependency.

“Lastly, when it comes to loneliness, vulnerability and love, the safest defence remains resolutely human,” she concludes.

