AI-Powered Defense: How Generative AI Is Neutralizing Cyber Threats

Shaahid Chitarah Amod, Product Manager of the OT/XIoT Business Unit for the Levant and Egypt Africa Region at CyberKnight, and Muhammad Abdulla Marakkoottathil, Senior Product Manager for the Networking Business Unit, Gulf Region, at CyberKnight, say generative AI is playing a significant role in enhancing cybersecurity measures today.

How is generative AI being utilised to enhance cybersecurity measures today?
Shaahid Chitarah Amod: Generative AI in cybersecurity helps security teams stay ahead by simulating attacks, creating synthetic malware, and predicting threats. It boosts detection and response, making defenses smarter and more proactive. Generative AI is meant to take the guesswork out of troubleshooting and to work at a faster, more autonomous pace.

Muhammad Abdulla Marakkoottathil: Generative AI is playing a significant role in enhancing cybersecurity measures today. Here are some key ways it’s being utilised:

  1. Threat Detection and Response: Generative AI models, such as Generative Adversarial Networks (GANs), are used to simulate potential cyber threats and identify vulnerabilities in systems before they can be exploited. This proactive approach helps in strengthening defenses.
  2. Automated Incident Response: AI can automate responses to security incidents, reducing the time it takes to mitigate threats. For example, AI can isolate affected systems, block malicious IP addresses, and even roll back changes made by malware.
  3. Phishing Detection: Generative AI can analyse vast amounts of data to detect phishing attempts. By understanding the patterns and techniques used in phishing emails, AI can flag suspicious messages with high accuracy (a minimal illustrative sketch follows this list).
  4. Malware Analysis: AI models can analyse and classify malware more efficiently than traditional methods. They can identify new strains of malware by recognising patterns and behaviors that are indicative of malicious activity.
  5. Enhancing Threat Intelligence: AI can process and analyse threat intelligence data from various sources, providing security teams with actionable insights. This helps in predicting and preventing potential attacks.
  6. Improving Security Protocols: AI can continuously monitor and improve security protocols by learning from past incidents and adapting to new threats. This dynamic approach ensures that security measures are always up-to-date.
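
To make the phishing-detection idea in point 3 concrete, below is a minimal sketch of a text classifier built with scikit-learn. It is purely illustrative: the example emails are invented, and a production system would train on large labelled corpora and combine the text score with sender, header, and URL analysis.

```python
# Minimal phishing-detection sketch (illustrative only, not a production model).
# Assumes scikit-learn is installed; the example emails are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended, verify your password immediately here",
    "Urgent: confirm your banking details today to avoid account closure",
    "Team lunch is moved to 1pm on Thursday, see you there",
    "Attached are the meeting notes from yesterday's project review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF features plus logistic regression: a simple stand-in for the
# pattern-recognition step described in the interview.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

suspect = ["Please verify your password now or your account will be closed"]
print(model.predict_proba(suspect)[0][1])  # estimated probability of phishing
```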

Generative AI is indeed a double-edged sword, as it can also be used by malicious actors to create sophisticated threats. However, its potential to enhance cybersecurity measures is immense and continues to evolve rapidly.

What potential risks does generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
Shaahid Chitarah Amod: Just as an IT team can use generative AI to improve security, it also introduces new risks, such as advanced malware, phishing scams, deepfakes, and techniques for bypassing security defenses. This could lead to more AI-driven attacks and data breaches. For example, attackers can use AI to generate advanced phishing emails and deepfake content, using the AI engine to social engineer a persona.

Muhammad Abdulla Marakkoottathil: Generative AI introduces several potential risks in the cybersecurity landscape, particularly through AI-driven cyberattacks. Here are some key risks:

  1. Advanced Phishing and Social Engineering: Generative AI can create highly realistic and personalised phishing emails, making it difficult for individuals to distinguish between genuine and fraudulent communications. This increases the likelihood of successful phishing attacks.
  2. Deepfakes and Identity Impersonation: AI can generate convincing deepfake videos, audio, and images, which can be used to impersonate individuals, including high-profile targets. This can lead to identity theft, financial fraud, and reputational damage.
  3. Evolving Malware: AI can be used to develop more sophisticated malware that can adapt and evolve to avoid detection by traditional security measures. This makes it harder for cybersecurity systems to keep up with new threats.
  4. Automated Attacks: AI can automate various stages of a cyberattack, from reconnaissance to execution. This increases the speed and scale at which attacks can be carried out, overwhelming traditional defense mechanisms.
  5. Data Privacy and Unauthorised Access: Generative AI systems can inadvertently expose sensitive data if not properly managed. Additionally, AI-driven attacks can exploit vulnerabilities to gain unauthorised access to data.
  6. AI-Powered Exploits: AI can identify and exploit vulnerabilities in software and systems more efficiently than human hackers. This can lead to more frequent and severe security breaches.
  7. Misinformation and Disinformation: Generative AI can be used to create and spread false information, which can have serious implications for public trust and security.

To mitigate these risks, organisations need to adopt advanced cybersecurity measures, including AI-driven defenses, continuous monitoring, and robust incident response strategies.

How can organisations leverage generative AI for proactive threat detection and response?
Shaahid Chitarah Amod: AI-driven threat detection leverages machine learning and deep learning algorithms to identify suspicious activities and potential security threats with greater accuracy. By detecting anomalies and correlating data across multiple inputs, AI enhances threat visibility, reduces false positives, and enables faster, more proactive security responses; a minimal anomaly-detection sketch follows the list below. Organisations can use AI in conjunction with machine learning for:

  1. Threat Hunting: AI models analyse past attacks to predict future threats.
  2. Automated Security Playbooks: AI can recommend responses based on attack patterns.
  3. SOC Automation: AI assists Security Operations Centers by reducing manual workload.
  4. Vulnerability Management: AI scans software and suggests patches based on exploit trends.
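
As a concrete illustration of the anomaly-detection step described above, the sketch below scores connection records with an Isolation Forest. The feature columns and thresholds are hypothetical; a real SOC pipeline would ingest far richer telemetry and route flagged events to an analyst rather than simply printing them.

```python
# Minimal anomaly-detection sketch using an Isolation Forest (illustrative only).
# Feature columns (bytes sent, session duration, failed logins) are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic baseline of "normal" activity: [bytes_sent_kb, duration_s, failed_logins]
normal_activity = rng.normal(loc=[500, 30, 0.2], scale=[100, 10, 0.5], size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=42).fit(normal_activity)

# New observations: one typical session, one that looks like exfiltration plus brute force.
new_events = np.array([
    [480, 28, 0],     # looks normal
    [9000, 300, 25],  # huge transfer, very long session, many failed logins
])
print(detector.predict(new_events))  # 1 = normal, -1 = flagged for analyst review
```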

Muhammad Abdulla Marakkoottathil: Organisations can leverage generative AI for proactive threat detection and response in several impactful ways:

  1. Simulating Cyberattacks: Generative AI can create realistic simulations of cyberattacks, allowing organisations to identify vulnerabilities and test their defenses. This helps in preparing for potential threats and improving overall security posture.
  2. Automated Threat Hunting: AI can continuously scan networks and systems for signs of malicious activity. By identifying patterns and anomalies, AI can detect threats that might go unnoticed by traditional methods.
  3. Enhanced Threat Intelligence: Generative AI can analyse vast amounts of data from various sources to provide actionable threat intelligence. This helps security teams stay ahead of emerging threats and make informed decisions.
  4. Dynamic Incident Response: AI can automate and coordinate responses to security incidents, reducing the time it takes to contain and mitigate threats. This includes isolating affected systems, blocking malicious IP addresses, and rolling back changes made by malware.
  5. Behavioral Analysis: AI can monitor user behavior to detect deviations that may indicate a security threat. By establishing a baseline of normal behavior, AI can identify and respond to suspicious activities in real-time.
  6. Predictive Analytics: AI can predict potential security breaches by analysing historical data and identifying trends. This allows organisations to take preventive measures before an attack occurs.
  7. Deception Technology: AI can create decoys and traps to lure attackers away from valuable assets. By interacting with these decoys, attackers reveal their tactics, which can then be analysed to improve defenses (a toy decoy sketch follows this list).
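
As a toy illustration of the deception idea in point 7, the sketch below runs a minimal TCP decoy that pretends to be an SSH service, accepts connections on an otherwise unused port, and logs whoever touches it. The port number and fake banner are arbitrary; real deception platforms deploy far more convincing decoys and feed the resulting telemetry into analytics.

```python
# Minimal TCP decoy (honeypot) sketch -- illustrative only.
# Any connection to this otherwise unused port is suspicious and is logged.
import datetime
import socketserver

DECOY_PORT = 2222  # arbitrary port masquerading as an SSH service


class DecoyHandler(socketserver.BaseRequestHandler):
    def handle(self):
        src_ip, src_port = self.client_address
        timestamp = datetime.datetime.utcnow().isoformat()
        print(f"{timestamp} decoy touched by {src_ip}:{src_port}")
        # Present a fake banner, then capture whatever the client sends first.
        self.request.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")
        first_bytes = self.request.recv(1024)
        print(f"  first bytes from client: {first_bytes!r}")


if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", DECOY_PORT), DecoyHandler) as server:
        server.serve_forever()
```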

By integrating generative AI into their cybersecurity strategies, organisations can enhance their ability to detect, respond to, and prevent cyber threats more effectively.

What ethical concerns arise when using generative AI in cybersecurity, and how can they be addressed?
Shaahid Chitarah Amod: Key ethical concerns include:

  1. Privacy Violations: Generative AI models require large amounts of data, including potentially sensitive personal information. There’s a risk of privacy violations if AI systems are used to analyse or track user behavior for cybersecurity purposes, raising concerns about data collection, storage, and usage.
  2. Bias in AI Models: AI decisions may be influenced by biases in training data.
  3. Accountability Issues: Determining responsibility for AI-driven security actions can be complex.

How to Address These Issues:

  1. Implement strict data governance policies.
  2. Regularly audit AI models for bias and fairness.
  3. Maintain human oversight for critical security decisions.

Muhammad Abdulla Marakkoottathil: Using generative AI in cybersecurity brings several ethical concerns, but there are ways to address them effectively. Here are some key concerns and potential solutions:

  1. Privacy Violations: Generative AI can process vast amounts of data, potentially leading to privacy breaches. To address this, organisations should implement strict data governance policies, ensuring that AI systems only access and process necessary data.
  2. Bias and Discrimination: AI models can inherit biases from the data they are trained on, leading to unfair treatment of certain groups. Regular audits of AI systems and diverse training datasets can help mitigate these biases.
  3. Accountability and Transparency: When AI systems make autonomous decisions, it can be challenging to determine who is responsible for errors. Establishing clear accountability frameworks and maintaining transparency in AI decision-making processes are crucial.
  4. Misinformation and Deepfakes: Generative AI can create realistic but false content, such as deepfakes, which can be used maliciously. Developing robust detection tools and promoting digital literacy can help combat the spread of misinformation.
  5. Overreliance on AI: Relying too heavily on AI for cybersecurity can lead to complacency and reduced human oversight. Balancing AI automation with human expertise ensures a more resilient security posture.
  6. Intellectual Property Concerns: Using copyrighted material to train AI models can lead to legal issues. Organisations should ensure they have the right to use the data and respect intellectual property laws.

To address these ethical concerns, organisations can adopt the following strategies:

  1. Ethical AI Frameworks: Implementing ethical guidelines and frameworks for AI development and deployment can help ensure responsible use.
  2. Regular Audits and Monitoring: Conducting regular audits of AI systems to identify and rectify biases and other ethical issues.
  3. Transparency and Communication: Maintaining transparency in AI operations and clearly communicating AI’s role and limitations to stakeholders.
  4. Collaboration with Experts: Working with ethicists, legal experts, and cybersecurity professionals to develop and enforce ethical standards.

What challenges do cybersecurity teams face when integrating generative AI tools into their workflows?
Shaahid Chitarah Amod: Integrating AI into cybersecurity comes with several challenges. Organisations must navigate the complexity of ensuring AI tools work seamlessly with existing security systems while also addressing the shortage of skilled professionals trained in AI-driven cybersecurity. Another concern is the potential for false positives, which can overwhelm security analysts with excessive alerts, leading to alert fatigue. Finally, organisations must ensure that AI implementation complies with cybersecurity and privacy regulations, adding another layer of complexity to its adoption. There is also a mindset challenge: teams can be fearful of changing a system that took them months to perfect through customised configuration.

Muhammad Abdulla Marakkoottathil: Integrating generative AI tools into cybersecurity workflows presents several challenges for teams. Here are some of the key issues they face:

  1. Complexity and Expertise: Implementing generative AI requires specialised knowledge and skills. Many cybersecurity teams may lack the necessary expertise to effectively deploy and manage these advanced tools
  2. Data Privacy and Security: Ensuring that AI systems handle sensitive data securely is a major concern. There is a risk of data breaches if AI models are not properly secured
  3. Bias and Accuracy: AI models can inherit biases from the data they are trained on, leading to inaccurate threat detection and response. Regular audits and updates are needed to maintain accuracy and fairness
  4. Integration with Existing Systems: Integrating AI tools with existing cybersecurity infrastructure can be challenging. Compatibility issues and the need for significant modifications to current systems can hinder smooth integration
  5. Cost and Resource Allocation: Developing and maintaining AI systems can be expensive. Organisations need to allocate sufficient resources for training, deployment, and ongoing management of AI tools
  6. Regulatory Compliance: Adhering to regulatory requirements and ensuring that AI systems comply with data protection laws is crucial. This can be complex, especially with varying regulations across different regions
  7. Ethical Concerns: Addressing ethical issues, such as ensuring transparency and accountability in AI decision-making, is essential. Organizations must establish clear guidelines and frameworks to manage these concerns

Are there any notable examples of generative AI successfully preventing or mitigating cyberattacks?
Shaahid Chitarah Amod: Yes, some examples include the following. In March 2024, a pivotal vulnerability, CVE-2024-3094, was discovered within XZ Utils, a component widely used in Linux. It received a severity score of 10, the highest possible, making it one of the most substantial vulnerabilities discovered thus far in 2024. If not for its early discovery by a sharp-eyed researcher, this exploit could have made its way into mainstream Linux releases across global cloud and data center environments. Using a tool like XAGE, the AI engine would assist in faster detection of the above-mentioned vulnerability. CrowdStrike also offers AI-powered solutions, including the Charlotte AI generative AI security assistant and AI-driven threat detection and response capabilities, to enhance cybersecurity and streamline analyst workflows.

Muhammad Abdulla Marakkoottathil: Yes, there are several notable examples of generative AI successfully preventing or mitigating cyberattacks. CyberKnight Technologies is leveraging the power of generative AI to enhance its cybersecurity offerings and help its customers stay ahead of evolving threats. By integrating generative AI into the solutions it offers, CyberKnight is able to provide advanced threat detection, automated response, and comprehensive visibility across its clients’ network environments:

  1. Elastic AI Assistant for Security: It helps security teams by answering security questions, generating or translating natural-language queries, providing context on alerts, and integrating with custom knowledge sources. It also provides context-aware guidance on alert triage, incident response, and administrative tasks, making it easier for security teams to manage and respond to threats.
  2. Arista NDR: Arista NDR uses AI to autonomously detect and profile all devices within an enterprise network, including IoT and shadow IT devices. This comprehensive visibility helps in identifying potential threats and vulnerabilities.
  3. Gigamon Deep Observability: By integrating generative AI with Gigamon’s GigaVUE Cloud Suite, the system can analyse vast amounts of network traffic data to identify unusual patterns and potential threats that traditional methods might miss.
  4. IBM Watson: IBM Watson uses AI to analyse vast amounts of security data and identify potential threats. It can provide insights and recommendations for mitigating risks, helping organisations stay ahead of cyber threats.
  5. VirusTotal Code Insight: VirusTotal, a subsidiary of Google, uses generative AI to analyse code snippets and produce natural language summaries. This helps in identifying malicious code and understanding its behavior, thereby preventing potential attacks (an illustrative sketch of this pattern follows the list).
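
The "LLM explains suspicious code" pattern behind VirusTotal Code Insight (point 5) can be sketched in a few lines. The example below uses the OpenAI Python client purely to illustrate the pattern; it is not VirusTotal's pipeline, and the model name, prompt, and code snippet are assumptions.

```python
# Illustrative sketch of the "LLM summarises suspicious code" pattern
# (not VirusTotal's implementation). Requires the openai package and an API key.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

suspicious_snippet = r'''
import base64, os
os.system(base64.b64decode("Y3VybCBodHRwOi8vZXZpbC5leGFtcGxlL3guc2ggfCBzaA==").decode())
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # model choice is an assumption for this sketch
    messages=[
        {"role": "system",
         "content": "You are a malware analyst. Summarise what this code does "
                    "and say whether it looks malicious."},
        {"role": "user", "content": suspicious_snippet},
    ],
)
print(response.choices[0].message.content)
```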

These examples demonstrate how generative AI can be a powerful tool in enhancing cybersecurity measures and protecting against sophisticated cyber threats.

How do you see generative AI evolving in the cybersecurity domain over the next few years?
Shaahid Chitarah Amod: Generative AI will likely evolve by becoming more autonomous, handling security tasks with little to no human input. Threat intelligence will evolve, with AI predicting cyber threats before they emerge, allowing for a more proactive defense. Incident response will be faster and more efficient as AI-driven automation minimises delays. AI will also play a key role in enforcing Zero Trust security models, ensuring strict access controls and continuous verification. Additionally, attack simulations will become more advanced, enabling organisations to anticipate and defend against increasingly sophisticated cyber threats.

Muhammad Abdulla Marakkoottathil: Generative AI is poised to significantly evolve in the cybersecurity domain over the next few years. Here are some key trends and predictions:

  1. Enhanced Threat Detection: Generative AI will continue to improve in identifying and predicting cyber threats. By analysing vast amounts of data, AI can detect patterns and anomalies that indicate potential attacks, allowing for quicker and more accurate threat detection
  2. Automated Incident Response: AI-driven automation will become more sophisticated, enabling faster and more effective responses to security incidents. This includes isolating affected systems, blocking malicious activities, and even rolling back changes made by malware
  3. Proactive Defense Mechanisms: AI will be used to simulate potential cyberattacks and identify vulnerabilities before they can be exploited. This proactive approach will help organisations strengthen their defenses and reduce the risk of successful attacks
  4. Integration with DevSecOps: As attackers increasingly target the software development lifecycle, AI will play a crucial role in securing development environments. This includes automated code reviews, vulnerability scanning, and ensuring compliance with security best practices
  5. AI vs. AI: The cybersecurity landscape will see a rise in AI-driven attacks, leading to an “AI vs. AI” scenario where defensive AI systems must constantly adapt to counteract evolving AI-powered threats
  6. Improved User Behavior Analysis: AI will enhance the ability to monitor and analyse user behavior, detecting anomalies that may indicate insider threats or compromised accounts. This will help in preventing data breaches and unauthorised access
  7. Ethical and Regulatory Considerations: As AI becomes more integrated into cybersecurity, there will be increased focus on ethical considerations and regulatory compliance. Organisations will need to ensure transparency, accountability, and fairness in their AI systems.

Overall, generative AI will play a crucial role in transforming cybersecurity, making it more proactive, efficient, and adaptive to emerging threats.

What role does human oversight (HITL) play in ensuring generative AI systems are effectively managing cybersecurity threats?
Shaahid Chitarah Amod: Human-in-the-loop (HITL) oversight is crucial for:

  • Validating AI Decisions: Security analysts must verify AI-generated alerts.
  • Reducing False Positives: Human review ensures AI does not mistakenly block legitimate customised activity or systems.

Threat actors will use AI tools to try to drive false positives, so IT experts will need to continuously refine AI training data to improve accuracy. AI may also not recognise entirely new attack methods, requiring human intervention. Bringing humans into the mix makes AI more adaptable, letting models evolve with real-world changes and user needs. By keeping that human touch, we make sure machine learning systems can handle the complexities and nuances that pure algorithms might miss.
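
A minimal sketch of that human-in-the-loop principle is shown below: an AI-recommended containment action is never executed without an analyst's explicit approval. The recommendation and enforcement functions are hypothetical stubs standing in for a real model and a real firewall API.

```python
# Minimal human-in-the-loop approval gate (illustrative only).
# recommend_action and execute_action are stubs, not a real AI engine or firewall API.

def recommend_action(alert: dict) -> dict:
    """Stub for an AI engine that proposes a containment action for an alert."""
    return {"action": "block_ip", "target": alert["source_ip"], "confidence": 0.83}


def execute_action(proposal: dict) -> None:
    """Stub for the enforcement step, e.g. a firewall or EDR API call."""
    print(f"Executing {proposal['action']} on {proposal['target']}")


def respond_with_oversight(alert: dict) -> None:
    proposal = recommend_action(alert)
    print(f"AI proposes: {proposal['action']} on {proposal['target']} "
          f"(confidence {proposal['confidence']:.0%})")
    # Critical actions require explicit analyst approval before execution.
    if input("Analyst approval? [y/N] ").strip().lower() == "y":
        execute_action(proposal)
    else:
        print("Action rejected; alert routed back for manual investigation.")


if __name__ == "__main__":
    respond_with_oversight({"source_ip": "203.0.113.42", "rule": "beaconing"})
```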

Muhammad Abdulla Marakkoottathil: Human oversight, often referred to as Human-in-the-Loop (HITL), plays a crucial role in ensuring that generative AI systems effectively manage cybersecurity threats. Here are some key aspects of this role:

  1. Complex Decision-Making: While AI can automate many tasks, complex decision-making often requires human intuition and contextual understanding. Security analysts can interpret AI-generated insights and make informed decisions based on the broader business context
  2. Bias and Error Mitigation: AI systems can inherit biases from their training data, leading to inaccurate threat detection. Human oversight helps identify and correct these biases, ensuring more accurate and fair outcomes
  3. Ethical Considerations: Humans are essential for addressing ethical concerns related to AI use, such as privacy violations and transparency. They can ensure that AI systems operate within ethical guidelines and regulatory frameworks
  4. Continuous Improvement: Human feedback is vital for the continuous improvement of AI systems. By monitoring AI performance and providing feedback, humans help refine algorithms and enhance their effectiveness over time
  5. Incident Response: In the event of a security incident, human oversight ensures that responses are appropriate and proportionate. While AI can automate initial responses, humans can assess the situation and take necessary actions to mitigate the threat

How can smaller organisations with limited budgets incorporate generative AI for cybersecurity?
Shaahid Chitarah Amod: Smaller organisations can:

  1. Use open-source AI security tools like OpenAI’s GPT security models.
  2. Leverage cloud-based AI security services (e.g., Microsoft Defender, AWS GuardDuty); a brief GuardDuty example follows this list.
  3. Focus on AI-driven endpoint protection to reduce costs.
  4. Partner with MSSPs (Managed Security Service Providers) for AI-driven security solutions.
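
For example, a small team already running on AWS could pull high-severity GuardDuty findings with a few lines of boto3, as in the sketch below. It assumes GuardDuty is enabled and AWS credentials are configured, and the severity threshold is an arbitrary choice.

```python
# Sketch: list high-severity Amazon GuardDuty findings with boto3 (illustrative).
# Assumes GuardDuty is already enabled and AWS credentials are configured.
import boto3

guardduty = boto3.client("guardduty")

for detector_id in guardduty.list_detectors()["DetectorIds"]:
    finding_ids = guardduty.list_findings(
        DetectorId=detector_id,
        FindingCriteria={"Criterion": {"severity": {"GreaterThanOrEqual": 7}}},
    )["FindingIds"]
    if not finding_ids:
        continue
    findings = guardduty.get_findings(
        DetectorId=detector_id, FindingIds=finding_ids
    )["Findings"]
    for finding in findings:
        print(finding["Severity"], finding["Type"], finding["Title"])
```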

Muhammad Abdulla Marakkoottathil: Smaller organisations with limited budgets can still effectively incorporate generative AI for cybersecurity by following these strategies:

  1. Leverage Affordable AI Tools: There are cost-effective AI cybersecurity tools available that provide robust protection without breaking the bank. Tools like LLM Guard for cost-effective CPU inference and Protect AI for easy customisation can be great options.
  2. Cloud-Based Solutions: Opt for cloud-based AI cybersecurity solutions, which often come with lower upfront costs and scalable pricing models. Services like CrowdStrike Falcon offer comprehensive protection with flexible pricing.
  3. Open-Source AI Tools: Utilise open-source AI tools and frameworks. Projects like Snort for intrusion detection and OSSEC for host-based intrusion detection are free and can be customised to meet specific needs.
  4. Collaborate with Managed Security Service Providers (MSSPs): Partnering with MSSPs can provide access to advanced AI-driven cybersecurity tools and expertise without the need for significant in-house investment.
  5. Focus on Key Areas: Prioritise AI implementation in critical areas such as threat detection, phishing prevention, and automated incident response. This targeted approach ensures maximum impact with limited resources.

What best practices would you recommend for implementing generative AI tools while minimising risks?
Shaahid Chitarah Amod: Best practices include:

  1. Ensure AI Explainability: Use AI models that provide transparent decision-making insights.
  2. Implement Strong Data Governance: Protect sensitive data used in AI models.
  3. Use Multi-Layered Security: AI should complement, not replace, traditional security measures.
  4. Monitor AI Performance: Regularly audit AI-generated outputs for accuracy and fairness.
  5. Keep Human Oversight: Always have cybersecurity professionals reviewing AI actions.
  6. Training: Focus on internal training for staff on AI principles to ensure all stakeholders understand AI advantages and risk factors.

Muhammad Abdulla Marakkoottathil: Implementing generative AI tools while minimising risks involves several best practices. Here are some key recommendations:

  1. Data Security and Privacy: Ensure that AI systems handle data securely. Implement robust data encryption, access controls, and regular audits to protect sensitive information
  2. Bias Mitigation: Regularly audit AI models to identify and correct biases. Use diverse and representative datasets to train AI systems, and continuously monitor their outputs for fairness
  3. Transparency and Explainability: Maintain transparency in AI operations. Ensure that AI decisions are explainable and that stakeholders understand how AI systems work and make decisions
  4. Ethical Guidelines: Develop and adhere to ethical guidelines for AI use. This includes respecting user privacy, avoiding harmful applications, and ensuring that AI systems are used responsibly
  5. Human Oversight: Incorporate Human-in-the-Loop (HITL) mechanisms to ensure that critical decisions involve human judgment. This helps in mitigating risks associated with fully autonomous AI systems
  6. Regulatory Compliance: Stay updated with relevant regulations and ensure that AI systems comply with data protection and privacy laws. This includes understanding and adhering to regional and industry-specific requirements
  7. Continuous Monitoring and Improvement: Regularly monitor AI systems for performance and security issues. Implement feedback loops to continuously improve AI models and address any emerging risks
  8. Incident Response Plans: Develop and maintain robust incident response plans to quickly address any security breaches or AI system failures. This includes regular drills and updates to the response strategies

By following these best practices, organisations can effectively implement generative AI tools while minimising associated risks.

Cequence Intros Security Layer to Protect Agentic AI Interactions

Cequence Security has announced significant enhancements to its Unified API Protection (UAP) platform to deliver a comprehensive security solution for agentic AI development, usage, and connectivity. This enhancement empowers organizations to secure every AI agent interaction, regardless of the development framework. By implementing robust guardrails, the solution protects both enterprise-hosted AI applications and external AI APIs, preventing sensitive data exfiltration through business logic abuse and ensuring regulatory compliance.

There is no AI without APIs, and the rapid growth of agentic AI applications has amplified concerns about securing sensitive data during their interactions. These AI-driven exchanges can inadvertently expose internal systems, create significant vulnerabilities, and jeopardize valuable data assets. Recognising this critical challenge, Cequence has expanded its UAP platform, introducing an enhanced security layer to govern interactions between AI agents and backend services specifically. This new layer of security enables customers to detect and prevent AI bots such as ChatGPT from OpenAI and Perplexity from harvesting organizational data.

Internal telemetry across Global 2000 deployments shows that the overwhelming majority of AI-related bot traffic, nearly 88%, originates from large language model infrastructure, with most requests obfuscated behind generic or unidentified user agents. Less than 4% of this traffic is transparently attributed to bots like GPTBot or Gemini. Over 97% of it comes from U.S.-based IP addresses, highlighting the concentration of risk in North American enterprises. Cequence’s ability to detect and govern this traffic in real time, despite the lack of clear identifiers, reinforces the platform’s unmatched readiness for securing agentic AI in the wild.
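
To make the detection problem concrete, here is a toy sketch of user-agent screening for self-identifying AI crawlers. It is not Cequence's implementation, and, as the telemetry above suggests, most AI-driven traffic hides behind generic user agents, so signature checks like this must be combined with behavioural analysis.

```python
# Toy AI-crawler screening sketch (illustrative; not Cequence's implementation).
# A small, non-exhaustive list of self-identifying AI bot user-agent substrings.
KNOWN_AI_BOT_SIGNATURES = ("GPTBot", "PerplexityBot", "ClaudeBot", "CCBot")


def classify_request(user_agent: str, requests_per_minute: int) -> str:
    """Very rough triage of a request based on its user agent and request rate."""
    if any(sig.lower() in user_agent.lower() for sig in KNOWN_AI_BOT_SIGNATURES):
        return "declared-ai-bot"       # transparent crawler: allow, rate-limit, or block per policy
    if requests_per_minute > 120:      # arbitrary threshold for this sketch
        return "suspected-automation"  # generic user agent but bot-like behaviour
    return "likely-human"


if __name__ == "__main__":
    print(classify_request("Mozilla/5.0 (compatible; GPTBot/1.0)", 30))
    print(classify_request("Mozilla/5.0 (Windows NT 10.0; Win64; x64)", 400))
```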

Key enhancements to Cequence’s UAP platform include:

  • Block unauthorized AI data harvesting: Understanding that external AI often seeks to learn by broadly collecting data without obtaining permission, Cequence provides organizations with the critical capability to manage which AI, if any, can interact with their proprietary information.
  • Detect and prevent sensitive data exposure: Empowers organizations to effectively detect and prevent sensitive data exposure across all forms of agentic AI. This includes safeguarding against external AI harvesting attempts and securing data within internal AI applications. The platform’s intelligent analysis automatically differentiates between legitimate data access during normal application usage and anomalous activities signaling sensitive data exfiltration, ensuring comprehensive protection against AI-related data loss.
  • Discover and manage shadow AI: Automatically discovers and classifies APIs from agentic AI tools like Microsoft Copilot and Salesforce Agentforce, presenting a unified view alongside customers’ internal and third-party APIs. This comprehensive visibility empowers organizations to easily manage these interactions and effectively detect and block sensitive data leaks, whether from external AI harvesting or internal AI usage.
  • Seamless integration: Integrates easily into DevOps frameworks for discovering internal AI applications and generates OpenAPI specifications that detail API schemas and security mechanisms, including strong authentication and security policies. Cequence delivers powerful protection without relying on third-party tools, while seamlessly integrating with the customer’s existing cybersecurity ecosystem. This simplifies management and security enforcement.

“Gartner predicts that by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously. We’ve taken immediate action to extend our market-leading API security and bot management capabilities,” said Ameya Talwalkar, CEO of Cequence. “Agentic AI introduces a new layer of complexity, where every agent behaves like a bidirectional API. That’s our wheelhouse. Our platform helps organizations embrace innovation at scale without sacrificing governance, compliance, or control.”

These extended capabilities will be generally available in June.

Fortinet Expands FortiAI Across its Security Fabric Platform

Fortinet has announced major upgrades to FortiAI, integrating advanced AI capabilities across its Security Fabric platform to combat evolving threats, automate security tasks, and protect AI systems from cyber risks. As cybercriminals increasingly weaponize AI to launch sophisticated attacks, organizations need smarter defenses. Fortinet—with 500+ AI patents and 15 years of AI innovation—now embeds FortiAI across its platform to:

  • Stop AI-powered threats
  • Automate security and network operations
  • Secure AI tools used by businesses

“Fortinet’s AI advantage stems from the breadth and depth of our AI ecosystem—shaped by over a decade of AI innovation and reinforced by more patents than any other cybersecurity vendor,” said Michael Xie, Founder, President, and Chief Technology Officer at Fortinet. “By embedding FortiAI across the Fortinet Security Fabric platform, including new agentic AI capabilities, we’re empowering our customers to reduce the workload on their security and network analysts while improving the efficiency, speed, and accuracy of their security and networking operations. In parallel, we’ve added coverage across the Fabric ecosystem to enable customers to monitor and control the use of GenAI-enabled services within their organization.”

Key upgrades:
FortiAI-Assist – AI That Works for You

  1. Automatic Network Fixes: AI configures, validates, and troubleshoots network issues without human help.
  2. Smarter Security Alerts: Cuts through noise, prioritizing only critical threats.
  3. AI-Powered Threat Hunting: Scans for hidden risks and traces attack origins.

FortiAI-Protect – Defending Against AI Threats

  1. Tracks 6,500+ AI apps, blocking risky or unauthorized usage.
  2. Stops new malware with machine learning.
  3. Adapts to new attack methods in real time.

FortiAI-SecureAI – Safe AI Adoption

  1. Protects AI models, data, and cloud workloads.
  2. Prevents leaks from tools like ChatGPT.
  3. Enforces zero-trust access for AI systems.

FortiAI processes queries locally, ensuring sensitive data never leaves your network.

SandboxAQ Platform Tackles AI Agent “Non-Human Identity” Threats

SandboxAQ has announced the general availability of AQtive Guard, a platform designed to secure Non-Human Identities (NHIs) and cryptographic assets. This critical security solution arrives as organizations worldwide face increasingly sophisticated AI-driven threats capable of autonomously infiltrating networks, bypassing traditional defenses, and exploiting vulnerabilities at machine speed.

Modern enterprises are experiencing an unprecedented surge in machine-to-machine communications, with billions of AI agents now operating across corporate networks. These digital entities – ranging from legitimate automation tools to potential attack vectors – depend on cryptographic keys, digital certificates, and machine identities that frequently go unmanaged. This oversight creates massive security gaps that malicious actors can exploit, leading to potential data breaches, compliance violations, and operational disruptions.

“There will be more than one billion AI agents with significant autonomous power in the next few years,” stated Jack Hidary, CEO of SandboxAQ. “Enterprises are giving AI agents a vastly increased range of capabilities to impact customers and real-world assets. This creates a dangerous attack surface for adversaries. AQtive Guard’s Discover and Protect modules address this urgent issue.”

AQtive Guard addresses these challenges through its integrated Discover and Protect modules. The Discover component maintains continuous, real-time visibility into all NHIs and cryptographic assets including keys, certificates, and algorithms – a fundamental requirement for maintaining regulatory compliance. The Protect module then automates critical security workflows, enforcing essential policies like automated credential rotation and certificate renewal to proactively mitigate risks before they can be exploited.
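
As a small illustration of the kind of hygiene the Protect module is described as automating, the sketch below parses a PEM certificate with the Python cryptography library and flags it for renewal when it is close to expiry. The file path and 30-day window are arbitrary, and this is not SandboxAQ's code.

```python
# Sketch: flag an X.509 certificate for renewal when close to expiry (illustrative).
# Requires the 'cryptography' package; the path and 30-day window are arbitrary.
import datetime
from pathlib import Path

from cryptography import x509

cert_path = Path("service.pem")  # hypothetical certificate file
cert = x509.load_pem_x509_certificate(cert_path.read_bytes())

remaining = cert.not_valid_after - datetime.datetime.utcnow()

print(f"Subject: {cert.subject.rfc4514_string()}")
print(f"Expires: {cert.not_valid_after.isoformat()} ({remaining.days} days left)")

if remaining < datetime.timedelta(days=30):
    # In an automated workflow this would trigger reissuance and rotation, not a print.
    print("ACTION: certificate is within the renewal window; rotate it now.")
```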

At the core of AQtive Guard’s capabilities are SandboxAQ’s industry-leading Large Quantitative Models (LQMs), which provide organizations with unmatched visibility and control over their cryptographic infrastructure. This advanced technology enables enterprises to successfully navigate evolving security standards, including the latest NIST requirements, while maintaining robust protection against emerging threats.

“As organizations accelerate AI adoption and the use of agents and machine-to-machine communication across all business domains and functions, maintaining a real-time, accurate inventory of NHIs and cryptographic assets is an essential cybersecurity practice. Being able to automatically remediate vulnerabilities and policy violations identified is crucial to decrease time to mitigation and prevent potential breaches within the first day of use of our software,” said Marc Manzano, General Manager of Cybersecurity at SandboxAQ.

SandboxAQ has significantly strengthened AQtive Guard’s capabilities through deep technical integrations with two cybersecurity industry leaders. The platform now features robust integration with CrowdStrike’s Falcon® platform, enabling direct ingestion of endpoint data for real-time vulnerability detection and immediate one-click remediation. This seamless connection allows security teams to identify and neutralize threats with unprecedented speed.

Additionally, AQtive Guard now offers full interoperability with Palo Alto Networks’ security solutions. By analyzing and incorporating firewall log data, the platform delivers enhanced network visibility, improved threat detection, and stronger compliance with enterprise security policies across hybrid environments.

AQtive Guard delivers a comprehensive, AI-powered approach to managing NHIs and cryptographic assets through four key functional areas. The platform’s advanced vulnerability detection system aggregates data from multiple sources including major cloud providers like AWS and Google Cloud, maintaining a continuously updated inventory of all cryptographic assets.

The solution’s AI-driven risk analysis engine leverages SandboxAQ’s proprietary Cyber LQMs to accurately prioritize threats while dramatically reducing false positives. This capability is enhanced by an integrated GenAI assistant that helps security teams navigate complex compliance requirements and implement appropriate remediation strategies.

For operational efficiency, AQtive Guard automates the entire lifecycle management of cryptographic assets, including issuance, rotation, and revocation processes. This automation significantly reduces manual errors while eliminating the risks associated with stale or compromised credentials. The platform also provides robust compliance support with pre-configured rulesets for major regulatory standards, customizable query capabilities, and comprehensive reporting features. These tools help organizations accelerate their transition to new NIST standards while maintaining continuous compliance with evolving requirements.

Available now as a fully managed, cloud-native solution, AQtive Guard is designed for rapid deployment and immediate impact. Enterprises can register for priority access to begin early adoption and conduct comprehensive risk assessments of their cryptographic infrastructure.
