Risk Regulations in Banking and AI for Senior IT Leaders

The banking industry faces a fast-evolving risk landscape as advanced technologies intersect with heightened regulatory scrutiny. U.S. regulators now expect banks to manage technology-driven risks—cyber threats, AI model transparency, and system resilience—with the same rigor once applied after the 2008 crisis under Dodd-Frank. That legacy of governance is expanding to cover cybersecurity, operational resilience, and third-party risk in AI-enabled environments. Senior IT leaders must not only ensure compliance but also demonstrate proactive, well-governed innovation. This article explores recent U.S. regulatory shifts shaping banking technology, AI oversight, and the expectations for risk governance in a digital era.


The Changing Regulatory Landscape for Banking Tech

Regulators have made it clear that technology risk is financial risk, and they are updating guidance accordingly. The Office of the Comptroller of the Currency (OCC), for example, formalized “heightened standards” for risk governance at large banks, reinforcing that boards of directors hold ultimate responsibility for overseeing risk management – including IT and innovation risks – while management handles day-to-day implementation.

These guidelines require banks to maintain a comprehensive risk governance framework commensurate with their size and complexity, and to define their risk appetite and controls for all major risk domains. In practice, this means that even cutting-edge initiatives (cloud migrations, fintech partnerships, AI deployment, etc.) must align with the bank’s established risk limits and governance processes. Regulators’ expectation is that innovation should not outpace oversight: new technologies can be adopted, but only with “safety and soundness” maintained through strong internal controls and board scrutiny.

Federal agencies are likewise revisiting existing rules through a tech lens. The Federal Reserve, FDIC, and OCC have exercised authority under Section 165 of Dodd-Frank to impose enhanced prudential standards on large institutions – and many of those standards now explicitly encompass operational and cyber risks.

For instance, the Federal Reserve’s Regulation YY (enhanced prudential standards) and supervisory guidance on risk management oblige large bank holding companies to have independent risk officers and committees that assess not just credit and market risks, but also operational resilience and IT risk exposures. This integrated approach is further reinforced by interagency efforts in recent years to address the digital transformation of finance.

Regulators increasingly acknowledge that major technology failures or cyber incidents at a bank can have systemic impacts, much like a capital shortfall. As a result, supervisory examinations and regulatory policies are placing greater weight on how institutions govern technology and model risk. In short, U.S. bank regulators are adapting old frameworks and introducing new ones to ensure financial stability and consumer protection keep pace with technology.

The sections below delve into four key areas – cybersecurity, operational resilience, AI and model risk, and third-party risk – where recent regulatory developments are impacting banking technology and AI systems.

Strengthening Cybersecurity in Financial Services

Cybersecurity has been a top priority for regulators for over a decade, but recent rules signal a move toward even stricter requirements and enforcement. A landmark development at the state level is New York’s updated cybersecurity regulation (23 NYCRR Part 500) for financial institutions. In November 2023, the New York Department of Financial Services (NYDFS) finalized major amendments to Part 500 – the first significant overhaul since the rule was introduced in 2017.

These revisions respond to the rise in sophisticated cyber threats and set a higher bar for banks and insurers operating in New York. Among the notable changes are new obligations that may demand substantial investment in cybersecurity programs, an expectation of more aggressive regulatory enforcement, and even the creation of a new “Class A” tier of large institutions subject to extra controls.

For example, under the amended rule, large financial firms must undergo independent cybersecurity audits, implement advanced monitoring of privileged account access, deploy endpoint detection and response tools, and have their boards or senior officers approve the cybersecurity policy annually. The rule also explicitly requires executive management and boards to exercise effective oversight of cyber risk, ensuring they are well-informed and accountable for the firm’s security posture.

NYDFS’s stricter standards are viewed as a potential blueprint for other regulators – indeed, the NYDFS noted that its enhancements could influence federal rules (the FTC, for instance, has toughened the Safeguards Rule under Gramm-Leach-Bliley in a similar spirit).

At the federal level, while there isn’t an exact analog to NYDFS Part 500, banking regulators use a combination of guidance and existing laws to drive cybersecurity resilience. The interagency FFIEC Cybersecurity Assessment Tool and the long-standing federal requirements under the Gramm-Leach-Bliley Act (GLBA) – which mandate banks to implement comprehensive information security programs – form the backbone of cyber regulatory compliance for banks.

In 2021, federal banking agencies also issued a new rule requiring banks to notify regulators within 36 hours of significant cyber incidents, underscoring the urgency of response and communication in the face of breaches. Beyond specific rules, regulators often point to frameworks like the NIST Cybersecurity Framework (CSF) as industry best practice. Notably, NIST released Version 2.0 of its Cybersecurity Framework in early 2024, the first major update since 2014.

NIST CSF 2.0 places added emphasis on governance and supply chain risk, introducing a sixth core function, “Govern,” alongside the original functions (Identify, Protect, Detect, Respond, Recover). This new Govern function highlights activities such as establishing risk management strategy and oversight, defining cybersecurity policies and roles, and managing third-party and supply chain cyber risks. Banks have widely adopted the NIST CSF as a voluntary standard, and regulators often map their expectations to its principles. By strengthening governance (e.g. board involvement in cyber strategy) and supply chain defenses, NIST CSF 2.0 aligns with regulators’ focus on ensuring that cybersecurity is not just an IT issue but an enterprise risk management issue.

NIST’s updated Cybersecurity Framework 2.0 emphasizes six core functions – adding a new “Govern” function to reinforce top-level oversight of cybersecurity (establishing risk appetite, policies, roles, and supply chain risk management). This reflects regulators’ push for stronger cybersecurity governance in financial institutions.
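A gap assessment against the six CSF 2.0 functions can be as simple as a structured checklist. The sketch below is illustrative only: the six function names come from CSF 2.0, but the controls listed and their statuses are hypothetical examples, not regulatory requirements.

```python
# Minimal sketch of a CSF 2.0 gap assessment. The six function names are
# from NIST CSF 2.0; the controls and statuses below are hypothetical.
CSF_FUNCTIONS = ["Govern", "Identify", "Protect", "Detect", "Respond", "Recover"]

# Hypothetical self-assessment: function -> list of (control, implemented?)
assessment = {
    "Govern":   [("Documented cyber risk strategy", True),
                 ("Supply chain risk policy", False)],
    "Identify": [("Asset inventory current", True)],
    "Protect":  [("MFA on privileged accounts", True),
                 ("Privileged access management tool", False)],
    "Detect":   [("Endpoint detection and response", True)],
    "Respond":  [("Tested incident response playbook", False)],
    "Recover":  [("Annual DR failover test", True)],
}

def gap_report(assessment):
    """Return the controls still missing, grouped by CSF function."""
    return {fn: [c for c, ok in assessment.get(fn, []) if not ok]
            for fn in CSF_FUNCTIONS
            if any(not ok for _, ok in assessment.get(fn, []))}

print(gap_report(assessment))
```

Even a lightweight structure like this gives examiners something concrete: which function areas have open gaps, and a record that the assessment was actually performed.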

In practical terms, senior IT leaders should anticipate more frequent and granular scrutiny of their cybersecurity programs. Examiners are likely to ask: Is your institution following recognized frameworks (like NIST CSF)? Have you implemented multi-factor authentication, network monitoring, and regular penetration testing as expected by modern standards?

Are your board and CEO meaningfully engaged in cybersecurity oversight, receiving regular reports on cyber risks and approving key policies? Given the regulatory trend, banks that operate in jurisdictions like New York or that are subject to federal oversight should proactively upgrade their cyber controls. This includes conducting gap assessments against new rules (such as NYDFS’s amendments) and tightening any weaknesses – before regulators force the issue.

Cyber risk can no longer be treated as a purely technical matter; it is a board-level concern and a compliance obligation. The cost of non-compliance is rising, not only in terms of fines but also potential legal liability and reputational damage if an incident reveals lax controls.

Enhancing Operational Resilience and Risk Management

Closely related to cybersecurity is the broader mandate of operational resilience – the ability of a bank to deliver critical operations through disruptions. U.S. regulators have zeroed in on operational resilience in recent years, recognizing that technology failures, cyber incidents, pandemics, or natural disasters can all threaten the stability of individual institutions and even the financial system.

In late 2020, the Federal Reserve, OCC, and FDIC issued a joint paper titled “Sound Practices to Strengthen Operational Resilience,” which consolidated existing expectations for large banks.

This guidance does not introduce new rules so much as it underscores heightened supervisory expectations: banks should have robust operational risk management, business continuity planning (BCP), disaster recovery, third-party risk management, cybersecurity risk management, and incident response strategies in place.

The interagency paper highlights that recent disruptive events – from major IT outages to global pandemics – coupled with banks’ growing reliance on third-party service providers, have exposed firms to a wide range of operational risks. In other words, the regulators are saying: “We know things will go wrong – be prepared to withstand and recover from disruptions, whatever their cause.”

A concrete example of regulators’ focus here is the requirement for critical operations mapping and scenario testing. Large banks are expected to identify which business services are critical (e.g. payment processing, trading, loan servicing) and ensure they can recover those within defined timeframes if an incident occurs.

This might involve resilient architecture (such as active-active data centers, cloud failovers), well-practiced incident response playbooks, and regular simulations of extreme events. For IT leaders, an actionable takeaway is to integrate disaster recovery and BCP considerations at the design stage of systems and to routinely conduct drills (cyber range exercises, tabletop simulations for system outages, etc.).

Regulators have signaled that simply having plans on paper is not enough – they want evidence through testing and metrics that a bank could continue to operate its important business services even under adverse conditions.
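The "evidence through testing" expectation can be operationalized by recording drill results against each critical operation's recovery time objective (RTO). The sketch below is a minimal illustration; the operation names, RTOs, and drill timings are hypothetical assumptions, not regulatory values.

```python
from dataclasses import dataclass

# Illustrative sketch: compare the recovery time achieved in the last
# drill against each critical operation's RTO. All values are hypothetical.
@dataclass
class CriticalOperation:
    name: str
    rto_minutes: int        # maximum tolerable downtime (impact tolerance)
    last_test_minutes: int  # recovery time achieved in the most recent drill

def breaches(ops):
    """Operations whose last drill exceeded the stated impact tolerance."""
    return [op.name for op in ops if op.last_test_minutes > op.rto_minutes]

ops = [
    CriticalOperation("payment processing", rto_minutes=60, last_test_minutes=45),
    CriticalOperation("loan servicing", rto_minutes=240, last_test_minutes=300),
]
print(breaches(ops))  # flags operations that missed their RTO
```

A report like this, maintained over successive drills, is exactly the kind of metric-backed evidence regulators ask for when they probe whether resilience plans work in practice.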

Another aspect of resilience getting regulatory attention is data integrity and availability in the face of cyber threats. Federal agencies (and global standard-setters like the Basel Committee) have discussed concepts like “safe failure” – ensuring that if a cyberattack happens, there are backups and manual workarounds to maintain essential services. This overlaps with cybersecurity requirements but extends to making sure the business can run in degraded modes if necessary.

The message to senior technology officers is clear: invest in resilience now (redundancies, backups, network segmentation, etc.), because regulators will not be sympathetic to firms that are caught unprepared by foreseeable disruptions. Many of these expectations are becoming codified; for example, the OCC has integrated operational resilience into its examination handbooks, and the Federal Reserve’s supervisory stress scenarios increasingly include operational risk components.

By treating resilience as a continuous governance priority – on par with capital or liquidity – banks can better satisfy regulators and protect their customers and reputation when incidents occur.

AI Systems and Model Risk Governance

As banks accelerate their use of artificial intelligence and machine learning – for credit underwriting, trading algorithms, chatbots, fraud detection and more – regulators are scrutinizing AI through the familiar lens of model risk management and fair lending compliance.

To date, U.S. financial regulators have not issued sweeping AI-specific regulations; instead, they largely oversee AI using existing laws, guidance, and risk-based examinations. In practice this means that a bank’s AI models are subject to the same fundamental expectations as any other models that inform business decisions.

The Federal Reserve’s and OCC’s Model Risk Management Guidance (SR 11-7 and OCC 2011-12) remains the cornerstone: it calls for a rigorous model development, validation, and governance process to ensure models are sound and used appropriately.

Banks are expected to inventory their models (including AI/ML models), test them regularly for accuracy, guard against data errors, and implement strong change-control and oversight through model risk committees. Crucially, this guidance also emphasizes “effective challenge” by independent experts – meaning that AI models driving lending or trading decisions should be reviewable and explainable to risk managers, not black boxes that operate without human scrutiny.
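A model inventory check in the spirit of SR 11-7 can be sketched as follows. The record fields and the 365-day validation cycle are illustrative assumptions for the example, not terms prescribed by the guidance itself.

```python
from dataclasses import dataclass
from datetime import date

# Sketch of a model inventory check: flag models whose independent
# validation is past due. Fields and the 365-day cycle are assumptions.
@dataclass
class ModelRecord:
    model_id: str
    purpose: str
    owner: str
    last_validated: date
    validation_cycle_days: int = 365

def overdue_validations(inventory, today):
    """Model IDs whose last validation is older than their cycle allows."""
    return [m.model_id for m in inventory
            if (today - m.last_validated).days > m.validation_cycle_days]

inventory = [
    ModelRecord("CR-001", "credit underwriting", "Risk Analytics", date(2024, 1, 15)),
    ModelRecord("FD-007", "fraud detection", "Payments", date(2025, 3, 1)),
]
print(overdue_validations(inventory, today=date(2025, 6, 1)))  # ['CR-001']
```

The point is not the code but the discipline: an inventory that can mechanically surface stale validations is far easier to defend in an exam than a spreadsheet updated ad hoc.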

Recent statements indicate regulators recognize certain novel risks with AI and are weighing whether additional guidance or rules are needed. A May 2025 report by the Government Accountability Office noted that financial regulators are actively assessing AI risks and may update regulations to address emerging vulnerabilities.

For example, some agencies have issued targeted guidance on AI in specific contexts – the CFPB (Consumer Financial Protection Bureau) has clarified that “black box” algorithms are not exempt from fair lending laws like the Equal Credit Opportunity Act, and lenders using AI must still be able to explain underwriting decisions to applicants. Similarly, federal banking regulators in 2021 put out a Request for Information on AI, signaling potential future rulemaking or supervisory guidance once they digested industry feedback.

To date, much of the oversight is through existing rules: if an AI model results in biased outcomes, the bank could violate anti-discrimination laws; if an AI-driven robo-advisor misleads consumers, it could trigger consumer protection actions.

We also see interagency coordination on AI issues – for instance, the banking agencies and CFPB jointly finalized quality-control standards for automated valuation models (AVMs) used in mortgage appraisals, aiming to ensure these AI-driven property valuations don’t introduce bias or safety and soundness problems. That new rule (adopted in 2023-2024) requires mortgage lenders to institute controls on their AVM tools to protect against data manipulation and discrimination, including random sample testing and documentation of compliance with fair lending laws. It serves as a concrete example of regulators tailoring existing law (in this case, Dodd-Frank’s AVM mandate) to address AI-related risk.

From a governance perspective, regulators and thought leaders are urging banks to adapt their risk management frameworks for AI, rather than treat AI as an impenetrable novelty. Michael Barr, the Federal Reserve’s Vice Chair for Supervision, recently underscored that banks should review and update their model risk management standards to account for AI’s unique challenges.

Issues such as data bias, lack of interpretability, and cybersecurity of AI systems (e.g. vulnerability to adversarial attacks) need to be explicitly managed. Banks should establish internal AI governance committees or working groups that bring together IT, risk, compliance, and business unit leaders to set guidelines on responsible AI use. Key questions include: How do we prevent discriminatory outcomes in AI-driven decisions? How do we validate an adaptive machine learning model that updates itself? Who is accountable if an AI-powered service causes customer harm?

Regulators have hinted that they expect banks’ controls to keep pace with AI innovation – for example, ensuring that any AI model making credit decisions has been vetted for fairness and can be explained to examiners, or that AI used in customer service doesn’t inadvertently expose private data. A senior Federal Reserve official noted that banks remain responsible for managing the risks of any AI tools they use, including those developed by fintech partners, and should demand transparency and sound controls from those third-party providers.
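One widely used first-pass fairness screen is the adverse impact ratio, a heuristic borrowed from the employment-law "four-fifths rule." The sketch below is illustrative only: it is not a regulator-prescribed fair lending test, and the decision data is hypothetical. A ratio well below 0.8 would typically prompt a deeper statistical review, not an automatic conclusion of bias.

```python
# Hedged sketch of a first-pass fairness screen: the adverse impact ratio
# ("four-fifths rule" heuristic). Illustrative only; real fair lending
# analysis requires far more rigorous statistical testing.
def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected_decisions, reference_decisions):
    """Ratio of approval rates; values below ~0.8 usually prompt review."""
    return approval_rate(protected_decisions) / approval_rate(reference_decisions)

# Hypothetical model outputs: 1 = approved, 0 = denied
protected = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]   # 40% approval
reference = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]   # 70% approval

ratio = adverse_impact_ratio(protected, reference)
print(round(ratio, 2))  # below 0.8 -> flag for deeper fairness review
```

Running a screen like this routinely, and documenting the follow-up when it flags, is one concrete way to show examiners that fairness vetting is part of the model lifecycle rather than an afterthought.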

In short, existing regulatory principles – fairness, transparency, accountability, and sound risk management – all apply to AI, even if the technology is new. We can expect more formal guidance on AI from regulators in the coming years, but forward-looking banks are already acting to ensure their AI deployments are compliant and well-governed under current rules.

Managing Third-Party and Vendor Risks

Modern banks rely on a vast ecosystem of third-party technology providers – from cloud computing platforms and core banking software vendors to data aggregators and fintech startups. This reliance can boost efficiency and innovation, but it also introduces significant risk, which regulators have been quick to address. In June 2023, the Federal Reserve, OCC, and FDIC issued a final Interagency Guidance on Third-Party Relationships: Risk Management, replacing prior guidance and harmonizing expectations across the agencies.

This comprehensive guidance lays out principles for managing risks at every stage of the third-party lifecycle, from initial planning and due diligence to contract negotiation, ongoing monitoring, and contingency planning for termination. A fundamental tenet of the guidance is that outsourcing a service does not transfer the risk to someone else – banks retain full responsibility for complying with laws and operating in a safe and sound manner, exactly as if they were performing the activity in-house. As the guidance bluntly states, a banking organization’s use of third parties “does not diminish its responsibility” to meet regulatory requirements and to ensure the outsourced activity is managed with appropriate controls.

This means that if a bank leverages a cloud provider or an AI analytics firm, regulators will expect the bank to have exercised robust due diligence: Is the third party financially stable and compliant with relevant regulations? Does it have strong security practices? Where is data stored and who can access it? The new interagency guidance encourages a risk-based approach – not all vendors are equal – so banks should identify which relationships involve critical activities or high risk and apply especially rigorous oversight to those. Critical activities could be those that, if the third party fails, could significantly disrupt the bank’s operations or customers (for example, a major payment processor or cloud infrastructure hosting key systems). For such relationships, regulators expect boards and senior management to be directly involved in approval and monitoring, and for banks to have contingency plans (such as the ability to quickly switch to backups or alternate providers).
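The risk-based tiering the guidance calls for can be sketched as a simple classification rule. Everything below is an assumption for illustration: the criteria, thresholds, and tier names are one plausible shape for such a scheme, not values from the interagency guidance.

```python
# Illustrative sketch of risk-based vendor tiering. Criteria, thresholds,
# and tier names are assumptions, not values from the interagency guidance.
def vendor_tier(supports_critical_operation, handles_customer_data,
                hard_to_replace):
    score = sum([supports_critical_operation, handles_customer_data,
                 hard_to_replace])
    if supports_critical_operation or score >= 2:
        return "critical"   # board-level oversight, tested contingency plan
    if score == 1:
        return "elevated"   # annual reassessment, security reporting
    return "standard"       # baseline due diligence

print(vendor_tier(True, True, True))    # e.g. a core cloud provider
print(vendor_tier(False, True, False))  # e.g. a marketing analytics vendor
```

The value of making the rule explicit is consistency: two assessors looking at the same vendor reach the same tier, and the rationale is documented for examiners.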

Another important aspect is ongoing monitoring and resiliency of third parties. The guidance and recent OCC bulletins advise banks to continuously monitor the performance and risk indicators of their vendors – not just set and forget a contract. This includes periodic audits, reviewing the vendor’s security reports, and staying alert to any changes (like mergers or financial troubles) that might affect the vendor’s reliability. Notably, regulators are concerned about concentration risk when many banks depend on a small number of tech providers (a scenario common in cloud services).

While no formal limits exist yet, there have been discussions (even at the Financial Stability Oversight Council) about whether certain critical service providers should be subject to direct regulatory oversight, given their systemic importance. We may see movement on that front in coming years, but meanwhile banks are expected to mitigate concentration risk on their own – for instance, through multi-cloud strategies or contractual rights that protect the bank if a vendor has an outage.

For IT leaders, the elevation of third-party risk management (TPRM) means greater involvement in due diligence and vendor governance. It’s no longer purely a procurement or vendor management office task – CIOs, CISOs, and CTOs should be part of evaluating third parties’ technical controls and compatibility with the bank’s risk standards. Banks are developing more detailed questionnaires and assessment frameworks for vendors (covering cybersecurity, software development practices, disaster recovery capabilities, compliance with privacy laws, etc.).

The 2023 interagency guidance explicitly calls for tailored risk management based on the risk profile of the third-party relationship. It also highlights the role of contracts in setting expectations – contracts should clearly stipulate security requirements, incident notification timelines, audit rights, data ownership, and the third party’s obligations to support the bank’s resilience (e.g. backup and recovery for outsourced services).

If a fintech or tech vendor cannot meet these expectations, regulators expect banks to either implement compensating controls or reconsider the relationship. In essence, regulators want banks to extend their risk governance frameworks across organizational boundaries – your vendor’s failures or weaknesses are effectively yours. This is especially pertinent for AI-related third parties: if a bank is using a fintech’s machine learning platform, the bank must ensure that platform has proper controls (for bias, data security, etc.) just as if the bank built the model itself.

Adapting to Evolving Expectations: Leadership Strategies

With regulatory expectations rising across the board – in cyber, resilience, AI, and third-party oversight – senior IT leaders in banking need to take proactive steps to keep their institutions ahead of the curve. Here are several strategies and best practices for navigating the evolving risk governance landscape:

  • Build a Strong Risk Governance Culture: First and foremost, IT executives should champion a culture where risk management is embedded in technology initiatives. This means ensuring that IT strategy and risk appetite are aligned – innovative projects should undergo risk assessments and receive appropriate sign-offs from risk and compliance teams. Engage the board’s risk committee regularly on technology matters; provide them with education on emerging IT risks (like AI ethics or cloud security) so they can exercise effective oversight. Regulators have made it clear that boards and senior management must be actively involved, so facilitate that involvement with clear reporting on key risk indicators, incidents, and remediation plans.

  • Align with Regulatory Frameworks and Standards: Use established frameworks as a guide to self-regulate before the examiners arrive. For cybersecurity, map your program to NIST CSF 2.0 and identify any gaps in the new Govern function areas (e.g. do you have a defined cybersecurity risk management strategy? Are you managing supply chain risks actively?). For operational resilience, follow the interagency sound practices: identify critical operations, set impact tolerances (how much downtime is acceptable), and test your capabilities to meet those tolerances under various scenarios. In model risk and AI, adhere to the principles of SR 11-7 by maintaining an inventory of all important models/algorithms, documenting their purposes and assumptions, and validating their performance regularly. Demonstrating alignment with such well-known standards will not only satisfy regulators but also improve internal risk posture.

  • Invest in Controls and Testing: Where new regulations have upped the ante, be ready to invest accordingly. If you fall under NYDFS’s jurisdiction (or even if not, as a matter of best practice), consider arranging independent cybersecurity audits and enhanced penetration testing beyond the minimum – these can uncover blind spots and also show regulators you take security seriously. Implement advanced technical controls that are becoming industry standard: for example, privileged access management tools to closely govern administrative accounts (which NYDFS now explicitly requires for large firms), and robust monitoring and analytics for unusual network behavior. Likewise, regularly test both your cyber defenses and your business continuity plans. Schedule simulations for cyberattacks (ransomware drills, data breach tabletop exercises) and for operational disruptions (cloud provider outage drills) to practice your response. Regulators increasingly ask for evidence of such testing, and they value forward-looking firms that learn and improve from test results.

  • Strengthen Third-Party Oversight: Given the new third-party risk guidance, review and, if needed, enhance your vendor risk management program. Categorize vendors by criticality and risk, ensure you have a complete inventory of third-party services, and periodically refresh due diligence. For critical tech partners, consider onsite assessments or require SOC 2 Type II reports and penetration test results as part of the contract. Ensure contracts include clauses that regulators expect: for instance, the right to audit the vendor, requirements for timely breach notification (NYDFS mandates 72-hour notice for cyber events, which you may want your vendors contractually obliged to support), and clarity on data ownership and security responsibilities. From an IT standpoint, also prepare contingency plans: if Vendor X’s system went down tomorrow, do we have a backup process? Could we quickly cut over to an alternate provider or bring the service in-house temporarily? Demonstrating such preparedness will give both management and regulators confidence that third-party risks are under control.

  • Embed AI Governance and Ethics: For institutions deploying AI, create a governance framework specifically for AI and advanced analytics. This could involve an AI oversight committee that includes IT, data science, compliance, and business leaders. Develop guidelines for AI model development and usage – for example, requiring bias testing on any AI that influences customer outcomes, setting standards for model explainability, and defining what human oversight is needed (a “human-in-the-loop” for certain high-stakes decisions). Keep abreast of evolving best practices (such as NIST’s AI risk management framework) and be prepared to explain to regulators how your AI models comply with existing laws (fair lending, consumer protection, etc.). In parallel, consider AI risk training for your staff and potentially your board – if executives will be accountable for AI outcomes, they need to understand the basics of how these tools work and what risks to watch for. The goal is to use AI beneficially but responsibly, with control structures that are as rigorous as those for traditional processes. If regulators see a bank using AI in a “Wild West” fashion – without documented controls or understanding – they will likely intervene. It is far better for the institution to self-police and document its risk mitigations.

  • Stay Informed and Engage with Regulators: Finally, senior IT and risk leaders should keep a close eye on regulatory developments. The landscape is still evolving – for instance, the Federal Reserve and other agencies are actively studying whether additional guidance on AI or cloud concentration risk is needed. By monitoring regulatory releases, participating in industry forums, and even commenting on proposed rules, banks can anticipate what’s coming. Proactive dialogue with regulators can be especially valuable: if your bank is trying something novel (say, deploying a generative AI chatbot), discussing your risk approach with examiners beforehand can build trust and possibly help shape future guidance. Regulators have encouraged public-private collaboration on emerging tech risks. Taking them up on that – by sharing insights and concerns – not only contributes to better policy but positions your institution as a leader in responsible innovation.
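The notification windows discussed above are unforgiving under incident-response pressure, so it is worth having the deadline arithmetic pre-built into runbooks. The sketch below uses the 36-hour federal window and NYDFS's 72-hour window mentioned earlier; the function name and the sample determination time are illustrative assumptions.

```python
from datetime import datetime, timedelta

# Sketch: derive notification deadlines from the moment an incident is
# determined to be reportable. The 36-hour federal and 72-hour NYDFS
# windows are from the rules discussed above; the determination time
# and function name are illustrative.
def notification_deadlines(determined_at):
    return {
        "federal_36h": determined_at + timedelta(hours=36),
        "nydfs_72h": determined_at + timedelta(hours=72),
    }

determined = datetime(2025, 5, 1, 9, 30)
for rule, due in sorted(notification_deadlines(determined).items()):
    print(rule, due.isoformat())
```

Embedding this in the incident playbook (and in vendor contracts, as the third-party bullet notes) removes one source of error during the chaotic first hours of a real event.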

Conclusion

The march of technology in banking – from digitization of services to adoption of AI – is met step-for-step by evolving regulatory oversight. U.S. regulators are intent on ensuring that as banks innovate, they do so safely, soundly, and in compliance with a complex web of risk regulations. Cybersecurity rules are tougher, operational resilience expectations are higher, AI is under the microscope through existing risk and fairness standards, and third-party risk management is more critical than ever.

For senior IT leaders, this means the job is no longer just about delivering technical capabilities, but also about steering those efforts within a strong risk governance framework. By embracing a proactive, comprehensive approach – one that treats regulatory compliance not as a checkbox but as a baseline for excellence – banks can meet regulators’ rising expectations and turn robust risk management into a competitive advantage.

The institutions that thrive will be those that can innovate with confidence, having built the controls, culture, and resilience needed to navigate both the opportunities and the risks of banking technology in the 2020s. In a landscape where change is the only constant, savvy IT and risk leaders will continue to adapt, learn, and lead their organizations in lockstep with the changing cadence of regulatory risk governance.

Sources: Recent regulatory and industry publications, including federal supervisory guidance and rules, state regulations, and expert analysis from federalreserve.gov, federalregister.gov, corpgov.law.harvard.edu, wilmerhale.com, balbix.com, and gao.gov.