In the rapidly evolving digital landscape, emergent technologies such as Generative Artificial Intelligence (AI) and Large Language Models (LLMs), notably ChatGPT, have yet to fully penetrate boardroom agendas, a gap that demands immediate attention. Mark Freebairn, our Head of CFO and Board Practice, provides a comprehensive assessment of the potential risks and implications associated with these technologies, emphasising the pivotal role non-executive directors can play in mitigating these challenges.
Recent developments highlight these concerns. A notable incident occurred when Samsung authorised engineers in its semiconductor division to use ChatGPT to troubleshoot source code issues. The decision led to the accidental exposure of confidential data, including proprietary source code, minutes from internal meetings, and hardware-specific data. These materials, considered trade secrets, were inadvertently disclosed, underscoring the significant risk such tools pose to intellectual property.
However, the risk of leaking intellectual property is far from the only issue companies face when deploying LLMs like ChatGPT. When confidential information intersects with these technologies, businesses can face severe legal and reputational risks. The ease with which data can be input, including documents containing personally identifiable information, even led Italy’s data protection authority to temporarily ban ChatGPT over non-compliance with privacy law.
Nevertheless, the commercial argument for generative AI continues to gain traction. Some proponents argue for its potential to inform business decision-making, influence strategy, and serve as an adjunct to the C-suite. However, potential pitfalls abound. For instance, should the AI advise increased investment in a particular supplier, material, or product and subsequently be proven wrong, the ramifications could be significant.
The oft-asked question, then, is: ‘How frequently does AI err?’ Disturbingly, this question remains without a definitive answer, even among the technology’s creators. Its unpredictable nature was epitomised by a disconcerting conversation New York Times tech reporter Kevin Roose had with Bing’s chatbot, which produced unexpected responses that Microsoft could not explain.
Despite these concerns, the effectiveness of AI in certain areas cannot be denied. In the medical field, AI has in some cases diagnosed health conditions faster and more accurately than human doctors. For instance, Lithuanian healthcare researchers successfully used AI to detect early signs of Parkinson’s disease by identifying subtle differences in patients’ voice patterns.
However, accuracy remains a key issue for AI systems, generative or otherwise. A recent study found that an AI system erroneously classified skin-lesion images containing rulers as cancerous, because medical images of malignancies are more likely to include a ruler for scale than those of healthy skin.
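This failure mode, often called ‘shortcut learning’, is easy to reproduce. The sketch below uses entirely synthetic data and a simple logistic regression as a stand-in for the study’s actual model: a ‘ruler present’ flag happens to track the malignant label during training, and the classifier’s accuracy collapses once that coincidence disappears.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, ruler_rate):
    """Synthetic 'skin lesion' data: one weak genuine feature plus a
    'ruler present' flag that tracks the malignant label at ruler_rate."""
    y = rng.integers(0, 2, n)                      # 0 = benign, 1 = malignant
    signal = y + rng.normal(0, 2.0, n)             # weakly informative feature
    ruler = np.where(rng.random(n) < ruler_rate,   # shortcut feature:
                     y,                            # copies the label...
                     rng.integers(0, 2, n))        # ...or is pure noise
    return np.column_stack([signal, ruler]), y

# Training data: rulers almost always appear alongside malignant images
X_train, y_train = make_data(5000, ruler_rate=0.95)
# Deployment data: ruler presence is unrelated to the diagnosis
X_test, y_test = make_data(5000, ruler_rate=0.0)

model = LogisticRegression().fit(X_train, y_train)
print(f"Accuracy while the shortcut holds: {model.score(X_train, y_train):.2f}")
print(f"Accuracy once the shortcut breaks: {model.score(X_test, y_test):.2f}")
```

The model looks impressive as long as the spurious correlation holds, which is precisely why such flaws can pass unnoticed until deployment.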
These challenges extend into the boardroom, with significant implications for non-executive directors. Unpredictable outcomes, even within specific guidelines and parameters, raise serious questions. The advent of ‘black box’ AI technologies, which perform complex tasks beyond human comprehension while continuously learning, further complicates matters, creating a potential auditing nightmare for compliance teams.
Biases present yet another hurdle. Tools like ChatGPT can unintentionally reflect political and cultural biases, rather than offering neutral analysis. A stark example of this was Amazon’s experimental AI hiring tool, which showed a marked preference for male candidates, leading to its discontinuation.
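Such bias need not go undetected. As a first-line audit, compliance teams can compare a model’s selection rates across candidate groups; the sketch below, which runs on invented screening data, applies the widely used ‘four-fifths’ rule of thumb to flag disparate impact.

```python
from collections import defaultdict

# Invented screening outcomes: (candidate group, model recommended?)
outcomes = [
    ("male", True), ("male", True), ("male", False), ("male", True),
    ("female", False), ("female", True), ("female", False), ("female", False),
]

totals, picked = defaultdict(int), defaultdict(int)
for group, recommended in outcomes:
    totals[group] += 1
    picked[group] += recommended       # True counts as 1

rates = {g: picked[g] / totals[g] for g in totals}
print("Selection rate by group:", rates)   # {'male': 0.75, 'female': 0.25}

# Four-fifths rule of thumb: flag any group selected at under 80%
# of the most-favoured group's rate
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Disparate impact flag raised:",
          round(min(rates.values()) / max(rates.values()), 2))
```

A check this simple will not catch every form of bias, but it gives compliance teams a concrete, repeatable starting point.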
For non-executive directors, these complexities necessitate a two-pronged approach: caution and awareness. Regulatory measures are forthcoming, and current drafts suggest they will mandate disclosure of copyrighted material used to train generative AI systems. In the interim, boardroom policies should be implemented to prevent the data leaks, legal risks, and reputational damage associated with tools like ChatGPT; a simple illustration of one such control follows below. Further, given the rapid evolution of these technologies, risk and governance related to generative AI must sit high on the board agenda, and boards, especially non-executive directors, must stay abreast of changes in the field.
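As a minimal sketch of one such control, the code below screens prompts for sensitive material before they leave the organisation. The specific patterns, including the ‘PRJ-’ project-code convention, are invented for illustration; a real policy would define its own list.

```python
import re

# Hypothetical patterns a data-leak policy might screen for before staff
# paste text into an external tool such as ChatGPT. The 'PRJ-' project
# code is an invented example of an internal identifier.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk phone": re.compile(r"\b(?:\+44|0)\d{9,10}\b"),
    "project code": re.compile(r"\bPRJ-\d{4}\b"),
}

def redact(text):
    """Replace sensitive matches with placeholders and report what was found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, findings

prompt = "Summarise the PRJ-2047 board minutes and copy jane.doe@example.com."
clean, found = redact(prompt)
print(clean)   # Summarise the [REDACTED PROJECT CODE] board minutes and ...
print(found)   # ['email', 'project code']
```

Even a lightweight filter of this kind makes the policy enforceable rather than aspirational, and the findings log gives compliance teams an audit trail of what staff attempted to share.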
Generative AI and Large Language Models are too significant to be left unattended in board discussions. Their ever-growing complexity and associated risks underscore the urgency for both executive and non-executive directors to build a thorough understanding of these technologies and their implications. Non-executive directors in particular are entrusted with a vital role in shaping boardroom strategies and policies around AI usage, steering their organisations safely through potential legal and reputational challenges. As regulatory shifts approach, proactive measures such as data governance policies, staff training, and comprehensive risk assessments should be implemented to ensure robust AI oversight. Generative AI is evolving at an unprecedented pace, and it is incumbent upon board members to keep abreast of these changes, ensuring their organisations not only survive but thrive in this new technological era.