Cisco is taking a radical strategy to AI safety in its new AI Defense answer.
In an unique interview Sunday with Rowan Cheung of The Rundown AI, Cisco Executive Vice President and CPO Jeetu Patel stated that AI Defense is “taking a radical approach to address the challenges that existing security solutions are not equipped to handle.”
AI Defense, introduced final week, goals to deal with dangers in growing and deploying AI functions, in addition to figuring out the place AI is utilized in a corporation.
AI Defense can defend AI techniques from assaults and safeguard mannequin habits throughout platforms with options equivalent to:
Detection of shadow and sanctioned AI functions throughout private and non-private clouds;
Automated testing of AI fashions for lots of of potential security and safety points; and
Continuous validation safeguards in opposition to potential security and safety threats, equivalent to immediate injection, denial of service, and delicate information leakage.
The answer additionally permits safety groups to raised defend their organizations’ information by offering a complete view of AI apps utilized by workers, create insurance policies that prohibit entry to unsanctioned AI instruments, and implement safeguards in opposition to threats and confidential information loss whereas making certain compliance.
“The adoption of AI exposes companies to new risks that traditional cybersecurity solutions don’t address,” Kent Noyes, world head of AI and cyber innovation at know-how companies firm World Wide Technology in St. Louis, stated in an announcement. “Cisco AI Defense represents a significant leap forward in AI security, providing full visibility of an enterprise’s AI assets and protection against evolving threats.”
Positive Step for AI Security
MJ Kaufmann, an creator and teacher at O’Reilly Media, operator of a studying platform for know-how professionals, in Boston, affirmed Cisco’s evaluation of present cybersecurity options. “Cisco is right,” she informed TechNewsWorld. “Existing tools fail to address many operationally driven attacks against AI systems, such as prompt injection attacks, data leakage, and unauthorized model action.”
“Implementers must take action and implement targeted solutions to address them,” she added.
Cisco is in a novel place to offer this type of answer, famous Jack E. Gold, founder and principal analyst at J.Gold Associates, an IT advisory firm in Northborough, Mass. “That’s because they have a lot of data from their networking telemetry that can be used to reinforce the AI capabilities they want to protect,” he informed TechNewsWorld.
Cisco additionally needs to offer safety throughout platforms — on-premises, cloud, and multi-cloud — and throughout fashions, he added.
“It’ll be interesting to see how many companies adopt this,” he stated. “Cisco is certainly moving in the right direction with this kind of capability because companies, generally speaking, aren’t looking at this very effectively.”
Providing multi-model, multi-cloud safety is essential for AI safety.
“Multi-model, multi-cloud AI solutions expand an organization’s attack surface by introducing complexity across disparate environments with inconsistent security protocols, multiple data transfer points, and challenges in coordinating monitoring and incident response — factors that threat actors can more easily exploit,” Patricia Thaine, CEO and co-founder of Private AI, a knowledge safety and privateness firm in Toronto, informed TechNewsWorld.
Concerning Limitations
Although Cisco’s strategy of embedding safety controls on the community layer by way of their present infrastructure mesh exhibits promise, it additionally reveals regarding limitations, maintained Dev Nag, CEO and founding father of QueryPal, a buyer assist chatbot primarily based in San Francisco.
“While network-level visibility provides valuable telemetry, many AI-specific attacks occur at the application and model layers that network monitoring alone cannot detect,” he informed TechNewsWorld.
“The acquisition of Robust Intelligence last year gives Cisco important capabilities around model validation and runtime protection, but their focus on network integration may lead to gaps in securing the actual AI development lifecycle,” he stated. “Critical areas like training pipeline security, model supply chain verification, and fine-tuning guardrails require deep integration with MLOps tooling that goes beyond Cisco’s traditional network-centric paradigm.”
“Think about the headaches we’ve seen with open-source supply chain attacks where the offending code is openly visible,” he added. “Model supply chain attacks are almost impossible to detect by comparison.”
Nag famous that from an implementation perspective, Cisco AI Defense seems to be primarily a repackaging of present safety merchandise with some AI-specific monitoring capabilities layered on high.
“While their extensive deployment footprint provides advantages for enterprise-wide visibility, the solution feels more reactive than transformative for now,” he maintained. “For some organizations beginning their AI journey that are already working with Cisco security products, Cisco AI Defense may provide useful controls, but those pursuing advanced AI capabilities will likely need more sophisticated security architectures purpose-built for machine learning systems.”
For many organizations, mitigating AI dangers requires human penetration testers who perceive how you can ask the fashions questions that elicit delicate info, added Karen Walsh, CEO of Allegro Solutions, a cybersecurity consulting firm in West Hartford, Conn.
“Cisco’s release suggests that their ability to create model-specific guardrails will mitigate these risks to keep the AI from learning on bad data, responding to malicious requests, and sharing unintended information,” she informed TechNewsWorld. “At the very least, we could hope that this would identify and mitigate baseline issues so that pen testers could focus on more sophisticated AI compromise strategies.”
Critical Need within the Path to AGI
Kevin Okemwa, writing for Windows Central, notes that the launch of AI Defense couldn’t come at a greater time as the most important AI labs are closing in on producing true synthetic normal intelligence (AGI), which is meant to duplicate human intelligence.
“As AGI gets closer with each passing year, the stakes couldn’t be higher,” stated James McQuiggan, a safety consciousness advocate at KnowBe4, a safety consciousness coaching supplier in Clearwater, Fla.
“AGI’s ability to think like a human with intuition and orientation can revolutionize industries, but it also introduces risks that could have far-reaching consequences,” he informed TechNewsWorld. “A robust AI security solution ensures that AGI evolves responsibly, minimizing risks like rogue decision-making or unintended consequences.”
“AI security isn’t just a ‘nice-to-have’ or something to think about in the years to come,” he added. “It’s critical as we move toward AGI.”
Existential Doom?
Okemwa additionally wrote: “While AI Defense is a step in the right direction, its adoption across organizations and major AI labs remains to be seen. Interestingly, the OpenAI CEO [Sam Altman] acknowledges the technology’s threat to humanity but believes AI will be smart enough to prevent AI from causing existential doom.”
“I see some optimism about AI’s ability to self-regulate and prevent catastrophic outcomes, but I also notice in the adoption that aligning advanced AI systems with human values is still an afterthought rather than an imperative,” Adam Ennamli, chief threat and safety officer on the General Bank of Canada informed TechNewsWorld.
“The notion that AI will solve its own existential risks is dangerously optimistic, as demonstrated by current AI systems that can already be manipulated to create harmful content and bypass security controls,” added Stephen Kowski, subject CTO at SlashNext, a pc and community safety firm, in Pleasanton, Calif.
“Technical safeguards and human oversight remain essential since AI systems are fundamentally driven by their training data and programmed objectives, not an inherent desire for human well-being,” he informed TechNewsWorld.
“Human beings are pretty creative,” Gold added. “I don’t buy into this whole doomsday nonsense. We’ll figure out a way to make AI work for us and do it safely. That’s not to say there won’t be issues along the way, but we’re not all going to end up in ‘The Matrix’.”