As artificial intelligence (AI) technologies rapidly evolve, regulatory frameworks struggle to keep pace. Investors cannot rely on established regulation to ensure the responsible development and use of AI. That is why abrdn's objective is to work with the companies in which we invest to encourage a future where AI delivers sustainable benefits for shareholders and other stakeholders.

Heightened investor scrutiny of AI practices has become evident in shareholder resolutions filed at the annual meetings of several companies – from technology giants to entertainment businesses. These annual meetings presented an opportunity for us to connect our research on AI with targeted engagement, voting and, where necessary, public statements to encourage change.

AI has rapidly emerged as a transformative force with the potential to greatly benefit users and society, but it also poses risks. AI has no inherent concept of values or ethics. Without clear governance and oversight, its outcomes may diverge from important, qualitative objectives and threaten sustainable value creation.

That is why we consider it crucial that companies with significant exposure to AI demonstrate:

  • robust governance and oversight
  • ethical guidelines
  • appropriate due diligence
  • transparent practices

Where AI is likely to have a significant impact on operations and labour relations, we believe it is prudent for companies to demonstrate a responsible approach at the earliest opportunity. Collaborating with the workforce can enable companies to mitigate negative outcomes and avoid costly disruption to labour relations.

Governance and Oversight

Some companies may be able to demonstrate robust governance and oversight mechanisms through existing structures. Those with particularly large and complex operations, involving both development and use of AI technologies, could benefit from dedicated governance structures. We considered Amazon to be such a company and supported a shareholder resolution requesting the creation of a board committee on AI. We also made a public statement to encourage the company and other investors to consider how governance and oversight could be developed. Governance and oversight structures always need to reflect a company's circumstances. In some cases, a dedicated structure for AI can consolidate and enhance oversight, helping a company to ensure a consistent approach is applied across its operations.

Ethical Guidelines

Ethical guidelines, or responsible AI principles, offer companies another way to ensure consistency. They set out the principles guiding how a company will develop and deploy AI in a way that is ethical, responsible, and trustworthy. In February, we voted for an AI resolution at Apple. The resolution asked Apple to prepare a transparency report and disclose any ethical guidelines the company has adopted regarding its use of AI technology.

We met Apple and the resolution's proponent to discuss their opposing arguments in more detail. In our view, the company is exposed to various risks associated with AI, and the requested disclosure, including ethical guidelines, could provide shareholders with evidence of an approach that can protect long-term value.

We were concerned that Apple had not given an indication of when it would disclose ethical guidelines. To reinforce our message to the company, we made a public statement in support of the resolution. Although the resolution failed to pass, we were encouraged by the notable level of support it received.

Due Diligence

Robust governance, oversight and ethical guidelines need to be accompanied by rigorous due diligence to be truly effective. Conducting due diligence allows companies to identify and address risks. This may slow down aspects of development, implementation, and launch; however, if it enables a bold idea to be delivered responsibly, that is a price worth paying. This was a key topic in our engagement with Meta.

Advertising is Meta’s primary source of revenue. The use of personal and behavioural data in targeted advertising exposes users to the risk of privacy violations, while algorithms may unintentionally reinforce bias and discrimination. When we met the company, we discussed how it uses AI in targeted advertising. AI presents opportunities for Meta to deliver targeted advertising more profitably, but its capabilities also pose risks as scale and complexity increase. We recognise these opportunities but maintain that an independent review of the company’s risk management in this area would provide investors with a proportionate level of assurance and support sustainable value creation.

The company has also faced high-profile controversies regarding mis- and disinformation, and the use of generative AI gives rise to new issues. Meta is clearly taking some steps to manage risk through mechanisms for content removal, identification and labelling. Nonetheless, after our engagement we remained concerned that Meta is insufficiently prepared to manage the potential volume of third-party, AI-generated content. The risks associated with this, in a key year for democratic elections around the world, are well documented.

Our research and engagement led us to support resolutions that would address these concerns and demonstrate to investors how Meta’s AI due diligence builds on its governance and oversight mechanisms and Responsible AI Pillars to protect shareholder interests.

Transparency

There is a common thread that underpins the principles discussed above – transparency. Without it, investors are unable to understand the approach a company is taking. Clearly there are limits – some information will be commercially sensitive. It is also important to acknowledge that reporting standards for AI are limited and that disclosures will have to evolve to keep pace with technological developments. But these are also the factors that make transparent reporting so crucial. Without voluntary transparency, there is a risk that companies will be subject to burdensome regulation.

We have discussed AI transparency with several companies and were encouraged by their interest in investor views and their desire to make disclosure useful and efficient. Microsoft has disclosed extensive information on its approach to AI, and we are pleased to have been able to provide feedback on how its pioneering Responsible AI Transparency Report could evolve.

Labour Relations

To support the adoption of AI, companies may also need to consider the impact on the workforce. As the use of AI becomes more widespread, non-technical staff will require training to understand the opportunities, limitations, and ethics of its use. Like those affected by the energy transition, workers may also require access to retraining to adapt to the changing labour market.

The entertainment industry has already witnessed debate and disruption due to concerns over the use of AI in film and television production, resulting in the Hollywood strikes of 2023.  Several entertainment companies received shareholder resolutions on AI use as a result. This serves as a cautionary example of how apprehension about the role of AI can result in disruption for businesses. There appears to be merit in demonstrating a responsible approach to the adoption of AI as quickly as possible to reassure key stakeholders. We used our engagement and voting to encourage this approach at selected companies in the sector. Ultimately, collaborating with the workforce will help companies to realise the full potential of AI.

What next?

Heraclitus, an ancient Greek philosopher, said “there is nothing permanent except change”. This certainly appears to be true when we consider the AI landscape. Companies face a significant and evolving challenge in adapting to AI, harnessing its potential, and mitigating its risks. As investors, our task is to understand how we can support and collaborate with companies to help them meet this challenge.

A focus on robust governance and oversight, ethical guidelines, appropriate due diligence, and transparency will continue to define the abrdn approach. As the technology develops, we believe these issues will remain crucial to the responsible development and use of AI.