Understanding the Ethical Risks of AI in Financial Services

Jim Bretschneider
10 Aug, 2023

Understand some of the potential ethical issues AI poses for financial services institutions and what technology leaders have agreed to provide as safeguards.

Artificial Intelligence (AI) has become one of the most talked-about technologies today, and for good reason: as the technology matures, it is finding application across the entire spectrum of human endeavors, from science to art… even financial services.

How does AI work?

AI emulates human decision-making by consuming vast amounts of data and applying algorithms that learn from past good and bad decisions. The algorithm is then tested by feeding it new data and measuring the quality of the machine's responses.

Over time, the model "learns" the difference between a good decision and a bad one, guided by the algorithm and by live human interaction that offers feedback about the quality of its decisions. At the same time, AI designers constantly adjust the learning algorithms and models to ensure optimal responses.
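
To make this learn-then-test cycle concrete, here is a minimal sketch in Python using scikit-learn. The data file, column names, and model choice are illustrative assumptions for this article, not a description of any particular institution's system.

    # Minimal sketch of the train/evaluate cycle described above.
    # The CSV of past loan decisions and its columns are hypothetical.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Hypothetical historical data: applicant features plus the
    # labeled past decision ("approved") the model learns from.
    data = pd.read_csv("historical_loan_decisions.csv")
    X = data[["income", "debt_ratio", "credit_history_years"]]
    y = data["approved"]

    # Hold out data the model has never seen, to test its responses.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    model = RandomForestClassifier(random_state=42)
    model.fit(X_train, y_train)  # "learn" from past decisions

    # Apply new data and measure the quality of the responses.
    predictions = model.predict(X_test)
    print(f"Held-out accuracy: {accuracy_score(y_test, predictions):.2%}")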

AI is currently being used in a number of financial services applications, including managing asset allocations, assessing portfolio risk, qualifying loan applications, and powering automated chatbots that recommend services based on client needs. AI promises to help financial services companies reduce costs, maximize client portfolio values, and accurately assess the creditworthiness of loan applicants.

There are significant benefits to employing AI within financial services, such as:

  • personalized customer experience,
  • risk management and fraud detection,
  • enhanced data analysis, and
  • operational efficiency and cost reduction.

But with great power comes great responsibility.

What are potential ethical problems that must be addressed?

  • 1. Transparency

    It is nearly impossible to understand how complex AI algorithms work, particularly proprietary algorithms from third-party developers. This lack of transparency can make it difficult for regulators, clients, and even the company itself to understand why a questionable transaction was made.

  • 2. Accountability

    Along with the potential lack of transparency, complex decision-making algorithms can make it difficult to assign responsibility and hold an entity accountable when errors occur.

  • 3. Privacy

    AI learns by assimilating vast amounts of data, which in the financial services industry can inadvertently include sensitive or personal information. Data privacy and security are critical to preventing misuse of, and unauthorized access to, that information.

  • 4. Bias

    Because a model is created by feeding it data, the resulting algorithm inherits any bias in that data. This can take many forms, such as a predisposition toward specific asset classes or trading strategies. Bias can also surface as discriminatory treatment of marginalized demographic groups when evaluating loan or insurance applications; a simple check for this kind of disparity is sketched after this list.

  • 5. Misplaced Dependence

    AI is a tool for decision support. Relying too heavily on the tool without sufficient human oversight can mean poor decisions go unnoticed, potentially resulting in lost customers and regulatory fines.

  • 6. Security

    All systems are vulnerable to attack, and AI is no exception. Beyond cyber threats such as data theft and ransomware, firms should be vigilant for bad actors who gain access to AI systems and manipulate the models, which can result in erroneous or fraudulent transactions.

  • 7. Systemic Risk

    As AI becomes more widespread, multiple institutions may adopt the same or similar AI tools, with industry-wide consequences. For example, markets can be adversely affected if numerous firms act on the same AI-driven signal, pushing prices in one direction.
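
As a concrete illustration of the bias concern in item 4, the sketch below computes approval rates per demographic group and flags a large disparity. This is one simple check (demographic parity) among many; the column names, the toy data, and the 80% threshold are illustrative assumptions, not a regulatory standard.

    # Minimal sketch of a demographic-parity check on model decisions.
    import pandas as pd

    # Hypothetical model outputs, one row per applicant.
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0,   0,   1],
    })

    # Approval rate per demographic group.
    rates = decisions.groupby("group")["approved"].mean()
    print(rates)

    # Disparity ratio: lowest approval rate relative to the highest.
    ratio = rates.min() / rates.max()
    print(f"Disparity ratio: {ratio:.2f}")
    if ratio < 0.8:  # commonly cited "four-fifths" rule of thumb
        print("Warning: approval rates differ materially across groups.")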

The future of AI: Addressing ethical concerns

Microsoft has published six recommended areas for the ethical use of AI: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

In addition, Microsoft, together with Amazon, Google, Meta, and OpenAI, has agreed to the following new safeguards for AI:

  • Using watermarking on audio and visual content to help identify content generated by AI.
  • Allowing independent experts to try to push models into bad behavior – a process known as “red-teaming”.
  • Sharing trust and safety information with the government and other companies.
  • Investing in cybersecurity measures.
  • Encouraging third parties to uncover security vulnerabilities.
  • Reporting societal risks such as inappropriate uses and bias.
  • Prioritizing research on AI’s societal risks.
  • Using the most cutting-edge AI systems, known as frontier models, to solve society’s greatest problems.

HSO can help

Along with its many benefits, AI poses a number of ethical challenges for financial services institutions. These institutions should work with industry leaders, technical experts, regulators, and other stakeholders to establish recommended guidelines for the ethical use of AI in the financial services industry.

Navigate AI with HSO by your side.

Consulting Offering

Empower Your Organization's Future with HSO's AI Briefing

Engage with experts, uncover potential, and shape your AI adoption journey for success

Get started

Learn more about how we support Financial Services firms

Ready to take a step into the future with AI? Talk to us.
