Artificial intelligence / March 19, 2026

Is seeing really believing anymore? The rise of AI video generation apps.

Amanda Lee

Senior Program Manager, Tech for Good & TELUS Wise®

Cats skateboarding. Profane Price is Right contestants. Albert Einstein as a UFC fighter. Ronald McDonald in a car chase. A goat eating pizza. They all sound like impossible (and pretty silly) scenarios. But with the release of new AI video generation apps, these kinds of videos are flooding social feeds and asking us (again!) to rethink what is real and what is not.

The next wave of AI video

OpenAI launched Sora 2. Google threw its hat in the ring with Veo 3. Meta has Vibes. And countless other apps now let people produce AI videos from simple text prompts. You can make up any scenario you want, type it into the app and watch it come back to you in broadcast quality.

Much of the AI video generation talk has centred on OpenAI’s Sora 2. Originally launched in October 2025 by invitation only, the app is now generally available on iOS, Android and the web.

Some of the standout features of Sora 2 include:

  • Creators can add dialogue, sound effects and audio to 15-second videos (Pro users can create 25-second videos).
  • With “Cameos,” users can upload a short clip of their face and voice and generate videos of themselves (depending on privacy settings, other users may be able to access a Cameo for their own videos by including the @user’s name in their prompts).
  • Users can scroll a social media-style feed (think TikTok or Instagram) of AI-generated content on the app.

The backlash

Almost immediately after Sora 2’s launch, advocacy groups and experts sounded the alarm bells. Consumer advocacy group Public Citizen was one of the most vocal, writing a letter to OpenAI CEO Sam Altman urging the company to pull the app.

According to a CBC article, Public Citizen was most concerned about the spread of misinformation and privacy violations. The letter also cited the “reckless disregard” for product safety and people’s right to their own likeness. Other advocates called out the proliferation of non-consensual images and the rise of even more realistic deepfakes and AI slop.

The estates of deceased celebrities (like Martin Luther King, Michael Jackson, Bob Ross and Mister Rogers) and actors’ unions have pushed back on misuse of likeness. Studios including Disney, Universal and Warner Brothers launched a major legal action against AI video generator Midjourney for replicating Superman, Batman, Bugs Bunny and Daffy Duck and illegally training its model on the characters.

There are also questions about data privacy and security. OpenAI is transparent in its privacy policy that it collects data to train its models by default. In 2023, OpenAI did experience a hack, but the compromise only affected information about its systems, not customer or partner data. The company has also been a target of spear phishing and malware attacks dating back to 2024.

Is AI video generation safe for kids?

The short answer: not right now. Some of the risks associated with the app can be especially harmful for young people, including:

  • Inappropriate/disturbing content
  • Identity manipulation and deepfakes
  • Minimal parental controls
  • Misinformation

Qustodio, a company that creates wellbeing tools to keep kids safe online, does not recommend Sora 2 for teens. Specifically, the company highlights content that can depict violent or dangerous situations in a realistic way, often with no warning. Young people may also not yet have developed the critical thinking skills needed to recognize misinformation and process it.

The app’s Cameo feature is especially worrisome. Without the right privacy settings, any user can include someone’s likeness in a video, which others can then edit, download or share. It’s fertile ground for cyberbullying or harassment.

If you do decide to venture into the realm of AI video generation with your kids, there are a few essential things to keep in mind to shape a safe and positive experience for them:

  • Use the parental controls available (this is only possible by linking accounts) and activate all the privacy settings possible (especially as they relate to Cameos and consent).
  • Have open and frequent conversations about critical thinking, watch videos together to spot the telltale signs of AI and talk about how to identify misinformation and find trusted sources elsewhere online.
  • Invest in an external solution that adds an extra layer of parental control when using the app. See PCMag for some great options.

With the growing use of AI video generation apps, we all need to learn to watch online content with a more critical eye. Don’t take everything at face value. Enjoy the creativity (many of these videos are genuinely funny and inventive). Set limits so you don’t get pulled into endless scrolling. And keep talking: questioning, teaching and modelling the digital behaviours you want your kids to adopt. With awareness, education and a safe space for conversation, your kids can embrace this new world of AI confidently and securely.

Tags:
Kids & tech