Managers at most big utility plants, refineries, and factories lack basic empirical data about the risks facing their industrial control systems and operational technology (OT). This gap stems from the scarcity of technical data on OT cyber incidents and from the difficulty of applying traditional actuarial methods to estimate the potential financial consequences of cyber events.

In the era before widespread artificial intelligence and machine learning, security experts had to dig into the data themselves to find correlations. Machine learning (ML) has provided a big boost by flagging rare or abnormal events as anomalies, and artificial intelligence (AI) has taken it further by applying logic to discern patterns.

There is too much information today for humans to manually monitor all the connections between cyber-physical systems, networks, vulnerabilities, and more. AI-based systems are needed to automate the processing of data from interconnected systems, continually analyzing it to deliver updates. As machine learning engines ingest massive volumes of data, AI platforms can assess risk with greater speed and accuracy. AI and ML strategies include, among others, vulnerability detection, accelerated processing of complex security data relationships, and modeling the impact of cyber incidents on networks of interconnected critical infrastructure.

AI and ML can help humans accelerate threat detection and response, for instance by spotting attackers probing an organization's perimeter during off hours. Such continual security monitoring also protects against the constant barrage of potentially harmful emails, ads, and malicious links shared across countless unsecured devices around the clock.
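
As a minimal sketch of what such monitoring might look like, the snippet below flags source addresses that probe many ports outside business hours. The log schema, hours window, and threshold are all illustrative assumptions, not any vendor's API.

```python
from datetime import datetime

# Hypothetical firewall log entries: (ISO timestamp, source IP, port, action).
# Schema, hours window, and threshold are illustrative assumptions.
BUSINESS_HOURS = range(7, 19)   # 07:00-18:59 local time
PROBE_THRESHOLD = 20            # distinct ports touched before we alert

def flag_off_hours_probes(log_entries):
    """Return source IPs that probed many ports outside business hours."""
    ports_by_source = {}
    for ts, src, port, action in log_entries:
        if action != "DENY":
            continue                      # only blocked attempts count
        if datetime.fromisoformat(ts).hour in BUSINESS_HOURS:
            continue                      # ignore normal working hours
        ports_by_source.setdefault(src, set()).add(port)
    return {src: sorted(p) for src, p in ports_by_source.items()
            if len(p) >= PROBE_THRESHOLD}

# Example: a single source hitting 25 ports at 03:00 gets flagged.
logs = [(f"2025-01-10T03:00:{i:02d}", "203.0.113.7", 1000 + i, "DENY")
        for i in range(25)]
print(flag_off_hours_probes(logs))
```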

To progress beyond detection and response, researchers can start anticipating problems by partnering with external sources of data and telemetry. That telemetry lets an organization compare its performance with industry peers across network architecture protections, hardening, and vulnerability management, and estimate how it fares against the industry average. Such knowledge also makes it possible to build baselines by determining what a normal data set looks like for that organization.
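
A small sketch of that peer comparison, under the assumption that we have one internal metric (days to patch) and hypothetical peer telemetry to benchmark against; all figures are invented:

```python
import statistics

# Hypothetical peer telemetry: days-to-patch critical vulnerabilities at
# comparable organizations. All numbers are invented for illustration.
peer_days_to_patch = [14, 21, 9, 30, 18, 25, 12, 16, 22, 19]
our_days_to_patch = 28

mean = statistics.mean(peer_days_to_patch)
stdev = statistics.stdev(peer_days_to_patch)
z_score = (our_days_to_patch - mean) / stdev

print(f"Peer mean: {mean:.1f} days; our z-score: {z_score:+.2f}")
# A positive z-score means slower patching than the peer baseline,
# flagging vulnerability management as an area to improve.
```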

Modeling Risk Maturity with Digital Twin Simulations

Organization-wide risk management should be applied at three levels, according to the Guide to Operational Technology Security by the National Institute of Standards and Technology (NIST). Level 1 addresses risk management from the organizational perspective and provides context for all risk management activities within an organization. Level 2 addresses risk from a mission and business process perspective, based on Level 1 risk context, decisions, and activities. Level 3 addresses risk at the system level, based on the Level 1 and 2 activities and outputs.

The NIST Risk Management Framework (RMF) applies risk management processes and concepts to systems and organizations for framing risk, assessing risk, responding to risk, and monitoring risk. Of course, most organizations want to know if they are above average or below average in terms of their risk management maturity. Too many leaders assume that they have high maturity levels, when in fact their performance lags the industry average.

One effective way to model and improve maturity is to build simplified digital twins of the security program's posture, including its infrastructure, endpoints, and vulnerabilities. Those digital twins can run attack path simulations, and the models become more realistic over time as the system ingests more real-world data. Millions of Monte Carlo simulations can be performed weekly on each of several hundred monitored sites to deliver dynamic insights about cyber risk. As the external threat landscape changes, or as cyber risk mitigation projects are evaluated, simulation helps estimate the losses an organization could expect.
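
The heart of such a simulation fits in a few lines. This sketch draws yearly incident counts from a Poisson distribution and per-incident losses from a lognormal distribution; the frequency and severity parameters are placeholders, not calibrated values:

```python
import numpy as np

rng = np.random.default_rng(42)
N_YEARS = 100_000        # simulated years for one site
FREQ = 0.8               # assumed mean incidents per year (placeholder)
MU, SIGMA = 13.0, 1.2    # assumed lognormal loss parameters (placeholder)

# For each simulated year, draw an incident count, then sum per-incident losses.
incident_counts = rng.poisson(FREQ, N_YEARS)
annual_losses = np.array([rng.lognormal(MU, SIGMA, n).sum() if n else 0.0
                          for n in incident_counts])

print(f"Expected annual loss:  ${annual_losses.mean():,.0f}")
print(f"95th percentile loss:  ${np.percentile(annual_losses, 95):,.0f}")
```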

Machine learning algorithms are also applied to normalize and categorize data ingested from dozens of other sources, including internal data and raw security signals from partner solutions such as intrusion detection and vulnerability management systems. In addition, natural language processing helps analyze text that contains cyber threat information.
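
As one hedged illustration of that categorization step, a simple text classifier can route free-text alerts into buckets such as intrusion or vulnerability signals. The labels and training snippets below are made up, and a production pipeline would use far richer data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented training set mapping raw alert text to a category.
alerts = [
    "multiple failed ssh logins from external address",
    "outdated firmware version detected on PLC",
    "unusual outbound traffic volume to unknown host",
    "critical CVE affects installed OPC server firmware",
]
labels = ["intrusion", "vulnerability", "intrusion", "vulnerability"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(alerts, labels)

print(model.predict(["new critical CVE reported for HMI firmware"]))
# -> ['vulnerability'] on this toy data
```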

Getting a Handle on the True Cost of Cyber Risk

Cyber risk management applies artificial intelligence and data-driven tools to manage, mitigate, and eventually transfer cyber risks to insurers. The goal is to model the probability of different outcomes across processes that cannot be easily predicted.

Simulations run what-if analyses on suggested mitigation projects to identify which ones will have the greatest positive impact on risk reduction. Machine learning can also model complex dependencies for risk aggregation in a bottom-up approach based on threat impact and frequency.
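
A minimal sketch of such a what-if analysis, reusing the Monte Carlo idea above: each hypothetical mitigation is assumed to scale incident frequency or shift severity, and the simulated saving is compared against its cost. Every parameter here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def expected_annual_loss(freq, mu, sigma, sims=50_000):
    """Mean simulated annual loss for given frequency/severity parameters."""
    counts = rng.poisson(freq, sims)
    return np.mean([rng.lognormal(mu, sigma, n).sum() if n else 0.0
                    for n in counts])

baseline = expected_annual_loss(freq=0.8, mu=13.0, sigma=1.2)

# Hypothetical mitigations: (name, cost, frequency multiplier, severity shift).
mitigations = [
    ("network segmentation", 250_000, 0.6, 0.0),
    ("EDR rollout",          150_000, 0.9, -0.3),
]
for name, cost, f_mult, mu_shift in mitigations:
    reduced = expected_annual_loss(0.8 * f_mult, 13.0 + mu_shift, 1.2)
    saving = baseline - reduced
    print(f"{name}: saves ${saving:,.0f}/yr, return {saving / cost:.1f}x")
```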

AI can bring the huge benefits of automation to every security project and task. In turn, the under-staffed and over-worked cybersecurity workforce becomes less focused on performing repetitive, manual processes, and more responsible for designing and operating the processes that automate tasks.

CISOs and CFOs should collaborate to develop their best shared risk coverage, with clear approaches for risk transfer and strategies for risk mitigation. By applying new AI-based software platforms, security teams can predict where and how threats are likely to occur in a client's unique situation. These solutions integrate advanced simulation, AI, and inside-out data, translating all of that information into dollars at risk: they quantify the financial losses from potential cyber incidents and calculate the return on risk mitigation approaches.

Risk quantification platforms apply AI to probabilistic models, including Bayesian networks and other graphical models. These techniques let analysts explicitly represent uncertainty by assigning probabilities to different outcomes, which helps the AI system make informed decisions from uncertain data. The key financial metrics delivered to CISOs and insurance underwriters include value at risk, probability of loss, financial impact of cyber incidents, top drivers of risk, top drivers of loss, and the expected risk reduction delivered by a sustained cyber risk management program. Clear financial metrics like these enable organizations to understand the full extent of their risk profile.
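
To make the Bayesian idea concrete, the toy calculation below updates the probability that a site is compromised after an intrusion alert fires. It is a single Bayes-rule update rather than a full network, and every probability is an assumed placeholder:

```python
# Toy Bayesian update: how likely is a site compromised, given an IDS alert?
# A full platform would chain many such nodes into a Bayesian network;
# every probability below is an illustrative assumption.
p_compromised = 0.02       # prior: base rate of compromise
p_alert_if_comp = 0.90     # detector sensitivity
p_alert_if_clean = 0.05    # detector false-positive rate

p_alert = (p_alert_if_comp * p_compromised
           + p_alert_if_clean * (1 - p_compromised))
posterior = p_alert_if_comp * p_compromised / p_alert

print(f"P(compromised | alert) = {posterior:.1%}")  # about 26.9%
```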
