
Connecting Canada
Building responsible AI together: How TELUS’ Purple Teaming brings everyone to the table
Oct 9, 2025
When it comes to building responsible AI, there’s power in collaboration. That’s the philosophy behind Purple Teaming, TELUS’ innovative approach to testing AI systems.
In traditional cybersecurity, testing resembles a competitive sport, with one security team attacking a system and the other defending it. The "red team" are the attackers, comprised of security experts who simulate real hacker behavior. These professionals — often including penetration testers, ethical hackers, and network specialists — probe for weaknesses by trying to break into networks, exploit software bugs, or gain unauthorized access to databases. The "blue team" consists of defensive security professionals who monitor system responses and patch vulnerabilities as they're discovered. These teams work separately, often in silos.
In AI security, a similar playbook applies, but with tactics tailored to artificial intelligence systems. AI red teams probe models for weaknesses by crafting malicious prompts, attempting to "jailbreak" safety guidelines, or feeding the system misleading information designed to trigger harmful outputs. Meanwhile, AI blue teams monitor model responses, implement safety filters, and patch vulnerabilities when problematic behaviors are discovered. Like their cybersecurity counterparts, these groups traditionally work separately on their own distinct teams.
“Purple teaming” flips this red-versus-blue script entirely. Just as mixing red and blue paint creates purple, Purple Teaming blends red and blue teams together into a single unified "purple" team. Combining offensive and defensive approaches into one collaborative unit creates greater agility and enables faster iterations—essential capabilities for keeping pace with rapidly advancing technology like AI.
But merging the teams who develop and test AI is just the beginning. What makes Purple Teaming truly unique is its spirit of collaboration: bringing together people with a diversity of perspectives and expertise to test a product to ensure everyone has a voice in building technology that shapes our future.
Everyone has a voice
TELUS invites team members from across the organization to join Purple Teaming sessions, regardless of background or expertise. Traditional AI testing relies on security professionals who approach systems with specialized expertise that everyday users don’t have. While Purple Teaming still includes the experts required for testing – AI architects, privacy experts, data ethicists, developers, and security specialists – it also actively involves people from a wide range of non-technical backgrounds in the testing processes.
Since AI impacts all of society, we need people from all walks of life testing it. Different people interact with AI systems in fundamentally different ways, bringing unique views that can reveal weaknesses or biases that could be missed by others.
Recent TELUS research
found that women are more likely to be concerned about safety implications of AI, and people who experience bias are more attuned to the respect and fairness of an AI system. Non-technical users approach systems differently than their creators, often discovering unexpected use cases that uncover new vulnerabilities. Purple Teaming in action
Before sessions begin, participants receive training on AI fundamentals and potential risks, ensuring everyone can contribute meaningfully regardless of technical background.
During sessions, participants may try to test a chatbot with regional slang to see how it responds, or feed it off-topic questions to see if they can push it off-script. Someone might roleplay as a distressed customer, prompting unexpected system reactions.
Each unexpected outcome sparks discussion, whether a minor quirk or a potential risk. The group uncovers blind spots that technical teams alone might miss, turning discoveries into concrete fixes and stronger safeguards.
Beyond finding vulnerabilities, these hands-on sessions help participants understand how AI works, how it can fail, and why responsible AI development matters. This creates a ripple effect of AI literacy and awareness throughout TELUS, building a more informed workforce that can better navigate an AI-enabled future.
Recognition and impact
TELUS' Purple Teaming approach earned the InfoGov World AI Excellence Award, recognizing its innovative impact on AI safety and security. More broadly, TELUS was also awarded the Responsible AI Institute Outstanding Organization prize, further validating the company's comprehensive commitment to responsible AI practices and Purple Teaming methodologies.
TELUS is committed to sharing this knowledge with the broader AI community around the world through workshops at venues such as
ALL IN
, SXSW
, the UN AI for Good Summit
, and events like TrustWeek
, spreading inclusive testing practices across the industry. The collaborative advantage
When we bring everyone into the room, we don't just find more vulnerabilities faster – we find better solutions.
Purple Teaming proves that in a world where AI touches everyone, the most important innovation might be the simplest: making sure everyone has a voice in building the technology that shapes our future.
To learn more about TELUS' approach to responsible AI and data ethics, visit
telus.com/trust
or download the latest TELUS AI Report at telus.com/responsibleai
. 
