In this regard, AI is like Google’s search engine: it filters through volumes of data no human could. Its true potential will be realized only when autonomously learning machines can draw new conclusions from existing patterns. In practice, AI’s much-touted ability to establish what is “normal” and then flag outliers as potentially suspicious doesn’t always work.
Networks are incredibly complex, and the larger they are, the harder they are to baseline. Commercial networks are also constantly changing, developing new patterns of behavior and interaction that further complicate their structure. To an AI system, even the ordinary evolution of a “healthy” network looks “suspicious,” which leads to a huge number of false positives.
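A minimal sketch illustrates the false-positive problem. Assume a toy detector that learns a statistical baseline of daily connection counts and flags anything far from the mean (the figures and the 3-sigma threshold are hypothetical, chosen purely for illustration):

```python
import statistics

# Hypothetical baseline: daily connection counts observed last month.
baseline = [100, 102, 98, 101, 99, 103, 97]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_suspicious(value, threshold=3.0):
    """Flag anything more than `threshold` standard deviations from the mean."""
    return abs(value - mean) / stdev > threshold

# A legitimate new service doubles traffic. The detector has no way to
# distinguish this healthy growth from an attack, so it raises an alert.
print(is_suspicious(200))   # prints True: a false positive
print(is_suspicious(101))   # prints False: within the learned baseline
```

Any statistical detector trained on a snapshot of a network faces the same dilemma: the baseline goes stale as soon as the network legitimately evolves.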
Cybercriminals also have a few tricks up their sleeve. They fool “smart” systems while staying within the bounds of perfectly normal behavior. Beyond that, well-documented adversarial machine learning techniques (methods that fool a model by feeding it misleading inputs) create the digital equivalent of optical illusions and can force AI into bad decisions.
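The “optical illusion” idea can be sketched in a few lines. Assume a toy linear detector that scores samples as malicious when the weighted sum of their features is positive; the weights, features, and step size below are all hypothetical. A gradient-sign perturbation (in the style of the fast gradient sign method) nudges each feature slightly and flips the verdict:

```python
import numpy as np

# Toy linear "malware score" model: flag a sample if w . x > 0.
# The weights are invented for illustration; a real detector is far
# larger, but gradient-based evasion works the same way.
w = np.array([0.9, -0.2, 0.5, 0.4])

def is_flagged(x):
    return float(w @ x) > 0.0

# A sample the model correctly flags as malicious.
x = np.array([1.0, 0.5, 1.0, 0.2])

# FGSM-style evasion: step each feature against the sign of the
# score's gradient (for a linear model, the gradient is just w).
epsilon = 0.8
x_adv = x - epsilon * np.sign(w)

print(is_flagged(x))      # prints True: original sample is caught
print(is_flagged(x_adv))  # prints False: perturbed sample evades detection
```

The perturbed sample differs from the original by a small, uniform amount per feature, yet the detector’s decision flips: the machine-learning analogue of an image that looks unchanged to a human but is misread by a model.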
So what next? Can AI systems be made more effective in cybersecurity? The biggest challenge here is transparency: the ability of a system to explain its findings. Ironically, AI systems that can explain how they reached a decision tend to be less effective than mysterious “black boxes.” Yet users distrust these opaque systems, which report that “something unusual happened” without explaining why it deserves attention or why it matters to the business.