
Amanda Lee
Senior Program Manager, Tech for Good & TELUS Wise®

When was the last time you posted a video on social media? Innocent enough, right? You just wanted family and friends to see the great hike you did, the amazing meal you ate or the birthday candles your son blew out. But with the rise of AI voice cloning and associated scams, those innocent videos provide just what bad actors need to fool the people you care about most.
AI voice cloning, also known as synthetic or deepfake audio, happens when people use AI to recreate a person's voice convincingly. Scammers typically harvest a voice sample from YouTube videos or social media posts. Bad actors have embraced AI voice cloning to take their scams to a more personal level.
How prevalent are AI voice scams? In its Q4 2024 Global Call Threat Report, Hiya, a global leader in AI-powered voice intelligence, found that one third of survey respondents across the U.S., UK, Canada, Germany, France and Spain encountered deepfake voice fraud in 2024, and 30% of those targeted fell victim to the scam.
Hiya also found that AI-generated deepfake fraud calls resulted in individual losses exceeding $6,000. In Canada, 27% of consumers reported experiencing deepfake fraud calls, with average reported losses of $1,479.
When AI voice cloning first hit the scene, the “grandparent” scam was most common. Targeting seniors, scammers would clone the voices of grandchildren and make distress calls to grandparents.
Marilyn Crawford received one such call. Early one morning, someone pretending to be a local police officer called to inform her that her grandson was in custody for stealing a car. At one point in the call, her “grandson” (the likeness of his voice at least) came on the line pleading with her for help. It was only going to cost $9,000 to release him. The scammer even sent a taxi to take her to the bank! Thankfully, the teller at the bank became suspicious upon hearing the story and saved Marilyn from the scam.
With the mainstream adoption of AI, scammers have gotten even more crafty and ambitious. In June 2025, the Canadian Anti-Fraud Centre (CAFC), in partnership with the Canadian Centre for Cybersecurity (CCCS), warned about a campaign impersonating high-profile public figures.
Scammers were leaving AI-generated voice messages mimicking the voices of senior officials and prominent public figures. The messages, targeting business executives and senior public officials, made urgent requests for money or sensitive information.
AI voice cloning is convincing. It’s meant to be. But there are some fairly obvious signs that the call you’re getting isn’t a friend or loved one on the other end of the line.
What steps can you take to protect yourself and your family? There are a few strategies you can adopt to help avoid falling victim to AI voice cloning scams.
AI voice scams are becoming more frequent and more convincing. Knowing how to spot them and following some simple safety strategies can make these scam attempts more a nuisance than a hazard. As AI becomes increasingly mainstream, looking out for our loved ones, staying informed and trusting our instincts will keep us safe and help us enjoy the benefits of this new technology era.
