Enterprise Risk Management Panel Explores the Growing Impact of AI on Cybersecurity

Hedge funds experienced a sudden increase in complex and intense cyberattacks in 2023. According to Agio, companies that brought their cyber programs in-house saw a significant escalation: 77% noted a rise in attack frequency, while 87% indicated that attacks were more severe. And 100% of these companies said they intend to engage third-party providers to run their cybersecurity programs in order to strengthen their defenses.

Now, with AI in the mix, organizations must both strengthen their defenses against traditional cyberattacks and contend with evolving, more sophisticated threats.

This is just one example of the impact of AI on cybersecurity discussed during the recent panel “The Impact of AI on Cybersecurity,” moderated by Shahryar Shaghaghi, professor of professional practice in the Enterprise Risk Management program at Columbia University School of Professional Studies (SPS).

Cybersecurity threats and breaches have been growing and evolving for decades, but the rise of AI, for all of its far-reaching benefits, brings with it an unexpected new set of cybersecurity risks.

AI is already widely applied in financial services, particularly within hedge funds, spanning front-office tasks such as trading decisions and back-office functions such as cybersecurity and data management. In front-office work, especially trading decision-making, the use of machine learning, a subset of AI, is growing rapidly, and AI can also support risk management and cybersecurity.

“There’s a wide range of models that could be used in each of these categories, whether it’s about making smarter trading decisions, having a more robust risk management protocol in place, and all the way down to now,” said Soheil Gityforoze, Ph.D. candidate in AI and Machine Learning at George Washington University. “I think it’s an exciting moment for this field, with a lot happening every day. New models and technologies are emerging constantly.”
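
The panelists did not walk through a specific model, but a minimal, purely illustrative sketch of the kind of machine-learning trading signal Gityforoze alludes to might look like the following. The features, labels, and synthetic data here are assumptions for illustration, not anything discussed on the panel.

```python
# Illustrative sketch only: a simple machine-learning trading signal.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical daily features standing in for momentum, volatility, and volume change.
X = rng.normal(size=(1000, 3))
# Synthetic label standing in for "next-day return was positive."
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Fit a gradient-boosted classifier and report out-of-sample accuracy.
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"Out-of-sample accuracy: {model.score(X_test, y_test):.2f}")
```

In practice, such a model would be trained on real market features and rigorously backtested before it informed any trading decision.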

Daniel Wallace, associate partner at McKinsey, shared an interesting example of how one of his clients in the electric utility sector is leveraging AI. The client needs numerous maintenance crews to fix power-line issues. These crews used to carry around thick three-ring binders filled with hundreds of pages of instructions and manuals. When there was a power line down, they had to open up these binders, flip to the relevant section, and figure out the procedures to follow.

The client then fed all of those manuals into a generative AI model.

“Now someone in the field can simply open up their iPad, describe their situation—working in a marshland with downed power lines during a storm—and receive custom step-by-step instructions tailored to their needs,” Wallace said.
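
Wallace did not describe the system's internals, but the pattern is essentially retrieval-augmented generation: find the relevant manual sections, then hand them to a generative model along with the field worker's description of the situation. A minimal sketch, with hypothetical manual excerpts and a placeholder for whatever model the utility actually uses:

```python
# Illustrative sketch of retrieval-augmented prompting over field manuals.
# The utility's actual system is not public; these excerpts are placeholders.
from dataclasses import dataclass

@dataclass
class ManualSection:
    title: str
    text: str

# Hypothetical excerpts standing in for the old three-ring binders.
MANUAL = [
    ManualSection("Downed line in wetlands", "Isolate the span, verify de-energization, ..."),
    ManualSection("Storm response staging", "Confirm crew PPE, establish a perimeter, ..."),
]

def retrieve(query: str, sections: list[ManualSection], k: int = 1) -> list[ManualSection]:
    """Crude keyword-overlap retrieval; a real system would use embeddings."""
    def score(s: ManualSection) -> int:
        return len(set(query.lower().split()) & set((s.title + " " + s.text).lower().split()))
    return sorted(sections, key=score, reverse=True)[:k]

def build_prompt(situation: str) -> str:
    """Assemble the prompt that would be sent to a generative model."""
    context = "\n\n".join(s.title + ": " + s.text for s in retrieve(situation, MANUAL))
    return (
        "Using only the manual excerpts below, give step-by-step instructions.\n\n"
        f"Manual excerpts:\n{context}\n\nField situation: {situation}"
    )

print(build_prompt("downed power lines in a marshland during a storm"))
```

Swapping the keyword overlap for vector embeddings, and the final print for a call to a hosted generative model, is what turns this toy into the kind of tool Wallace described.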

However, governments have not yet established formal guardrails or regulations for AI. The cybersecurity landscape also lacks a standardized methodology; multiple frameworks coexist, underscoring how much both the cybersecurity and AI industries are still growing and developing.

“We’re just building as we go,” said Demond Waters, chief information security officer at the New York City Department of Education. “We’re looking at AI policy to work with students and educators to explore use cases.”

This primarily concerns safeguarding the privacy of personal data, he added. Once data is entered into a language model, it’s crucial to ensure it isn’t used for unintended purposes or applications. “We’re also collaborating with Google, as well as Amazon, on different language models and implementing guardrails in some of our systems,” Waters said.
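
The panel did not detail how those guardrails are built, but one common pattern is to redact obvious personal data before a prompt ever reaches an external model. A minimal, assumed sketch, not the Department of Education’s actual implementation:

```python
# Illustrative guardrail: redact obvious personal data before sending a prompt
# to an external language model. Patterns are hypothetical and far from complete.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches with labeled placeholders so no raw PII leaves the organization."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Student jane.doe@example.org called from 212-555-0100 about her grade."))
```

Real guardrails go well beyond regular expressions, covering named-entity detection, access controls, and retention policies, but the placement is the point: filtering happens before data leaves the organization.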

The harm isn’t limited to privacy issues. “With generative AI and AI impact, I really believe the attack surface will change and it’s going to get worse,” said Kambiz Mofrad, chief information security officer at Svam International.

At the moment, however, the attack surface has not changed significantly, despite the availability of more sophisticated tools for targeting companies.

Waters agreed but pointed out that attacks are more frequent and diverse than before, which requires a shift in threat-detection tactics and the use of AI to identify and respond to evolving threats.
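
As one illustration of AI-assisted threat detection, an unsupervised anomaly detector can flag sessions that look nothing like normal activity. The features and data below are hypothetical, not drawn from any panelist’s environment.

```python
# Illustrative sketch: flag anomalous login activity with an unsupervised model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical features per session: login hour, failed attempts, bytes transferred.
normal = np.column_stack([
    rng.normal(13, 2, 500),       # mid-day logins
    rng.poisson(0.2, 500),        # few failed attempts
    rng.normal(5e6, 1e6, 500),    # typical transfer volume
])
suspicious = np.array([[3, 12, 9e7]])  # 3 a.m., many failures, huge transfer

# Learn what "normal" looks like, then score the new session.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(suspicious))   # -1 indicates an anomalous session
```

The detector learns a baseline of normal behavior and flags deviations, which is how AI-based detection can keep pace with attack patterns that signature rules have never seen.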

Cyber hygiene will be more important than ever. Breaches will persist if people overlook fundamental security measures, such as fixing vulnerabilities, ensuring proper coding practices, and prioritizing security over speed.

“We keep adopting new tools like AI,” Waters said, “but it’s like putting a new sock on a dirty foot.”


About the Program

The Master of Science in Enterprise Risk Management (ERM) program at Columbia University prepares graduates to inform better risk-reward decisions by providing a complete, robust, and integrated picture of both upside and downside volatility across an entire enterprise.