Microsoft is doubling down on security in the age of AI, focusing on both defending AI and using AI to strengthen cybersecurity. A year after launching Security Copilot, Microsoft is pushing things further with the addition of AI agents that can independently handle complex and high-volume tasks. These agents are specifically designed to take on jobs like phishing detection, data protection, identity management, and threat analysis—tasks that typically overload human teams due to sheer volume and complexity.
One of the standout examples is the phishing triage agent, which filters out false alarms and highlights real threats. With over 30 billion phishing emails detected in 2024 alone, this kind of automation is a game-changer. It frees up security professionals to focus on tougher issues while still ensuring threats are caught early. Alongside phishing, Microsoft is rolling out agents across various areas like conditional access, vulnerability remediation, and threat intelligence—each one built to learn from feedback, adapt to different workflows, and run securely within Microsoft’s broader ecosystem.
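To make the triage idea concrete, here is a deliberately simplified sketch of how an automated filter might split a phishing queue into "escalate" and "dismiss" buckets. This is purely illustrative—the signals, thresholds, and names below are hypothetical and bear no relation to how Microsoft's agent actually works:

```python
# Toy phishing-triage sketch. All signals, domains, and thresholds are
# hypothetical; a real agent would use far richer signals and learned models.
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

SUSPICIOUS_TERMS = {"verify your account", "urgent", "password reset"}
TRUSTED_DOMAINS = {"contoso.com"}  # hypothetical internal allow-list

def triage_score(email: Email) -> float:
    """Return a 0..1 suspicion score from a few toy signals."""
    score = 0.0
    domain = email.sender.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        score += 0.4  # unknown sender domain
    text = (email.subject + " " + email.body).lower()
    hits = sum(term in text for term in SUSPICIOUS_TERMS)
    score += min(0.6, 0.3 * hits)  # cap the keyword contribution
    return min(score, 1.0)

def triage(emails, threshold=0.5):
    """Split a queue into (escalate, dismiss) buckets for analysts."""
    escalate, dismiss = [], []
    for e in emails:
        (escalate if triage_score(e) >= threshold else dismiss).append(e)
    return escalate, dismiss
```

The point of the sketch is the workflow, not the scoring: humans only ever see the "escalate" bucket, which is where the volume savings come from; a production agent would also feed analyst verdicts back into its model rather than rely on static rules.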
In addition to Microsoft’s own tools, five partners are adding their own AI agents. These cover everything from privacy breach responses and network troubleshooting to streamlining SOC operations and reducing alert fatigue. This move reflects Microsoft’s commitment to an open, collaborative approach to security, where partners can build on Microsoft’s platform and enhance customer value.
The upgrades don’t stop with threat detection. Microsoft is also introducing data security investigation tools in Purview, designed to give security teams deeper insight into sensitive data exposure and streamline mitigation. As AI use grows rapidly, securing and managing it is becoming critical. Microsoft’s own research shows that over half of companies are seeing more incidents tied to AI, yet most still lack a solid framework for governing AI use securely.
To address this, Microsoft is rolling out AI security posture management across multiple clouds and platforms, including Google Vertex AI and models from Meta (Llama) and Mistral. They’re also boosting detection capabilities for AI-specific threats and rolling out new controls to fight shadow AI usage—where employees use unauthorized AI apps, risking data leaks. Teams is getting a security upgrade too, with new phishing protections going live in April 2025.
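The shadow-AI problem boils down to spotting traffic to AI services that IT never approved. A minimal sketch of that idea, assuming a hypothetical allow-list and a catalog of known AI domains (none of these names are real endpoints or Microsoft's actual mechanism):

```python
# Toy "shadow AI" check: flag domains in an access log that are known AI
# services but absent from the approved list. All domains are made up.
APPROVED_AI_DOMAINS = {"approved-ai.example.com"}
KNOWN_AI_DOMAINS = {
    "approved-ai.example.com",
    "chat.unsanctioned-llm.example",
    "api.other-llm.example",
}

def flag_shadow_ai(access_log):
    """Return known-but-unapproved AI domains seen in the log, sorted."""
    return sorted({
        domain for domain in access_log
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS
    })
```

In practice the hard part is keeping the catalog of AI services current and attributing traffic to users, which is why this lands in posture-management tooling rather than a simple block-list.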
Ultimately, Microsoft is betting big on an AI-first approach to cybersecurity. With a mix of autonomous tools, partner integrations, and a sharp focus on securing generative AI, they’re aiming to give every organization the tools it needs to stay ahead of evolving cyberthreats.