As generative AI tools become fixtures in boardrooms, surfacing risks and responding to director queries, we should ask whether the role of the human director is subtly (or seismically) shifting. If artificial intelligence (AI) becomes a core influence on deliberations, are boards still exercising judgment? What happens when that judgment is overtaken by AI? And what about the prospect of appointing an AI ‘director’ to the board?
The idea of an AI ‘director’ might sound like science fiction; however, in my view, some legal precedents suggest it may not be entirely implausible.
In Australia’s Buzzle Operations Pty Ltd (in liq) v Apple Computer Australia Pty Ltd [2011] NSWCA 109, the Court explored the concept of a company (not a human being) acting as a shadow or de facto director. The Court indicated that while a company cannot formally be appointed to a board, a company’s influence may be so great that it is, in effect, exercising control over board decisions. This raises an interesting possibility for the future: what if a board begins relying so heavily on an AI tool that its advice starts to hijack key decisions? What if directors begin deferring to AI? Could directors inadvertently walk into a situation where they end up dealing with a modern-day shadow director?
As boards lean into AI-generated insights, summaries, and even strategic prompts within board packs, the deeper governance question becomes: who is really making, or influencing, the board’s decisions? If directors merely rubber-stamp AI recommendations, they move into murky territory, both ethically and legally, and, in my view, into the realm of the (human) directors being derelict in their duties.
Even for a tech-forward organisation tempted by the idea of an AI director, there will be difficulties in ascertaining how an AI director could comply with its fiduciary duties, and in identifying by what means the AI would be held responsible for any breach of those duties.
Before going down that rabbit hole, there is a hard legal limit: the law and most company constitutions do not allow for the appointment of an AI director. Constitutions typically require directors to be natural persons with legal standing, in line with the requirements of the Corporations Act 2001 (Cth). Nor could an AI satisfy the residency requirements that apply to directors of both proprietary and public companies. So even if the function of a director could be mimicked, the appointment of an AI director could not be made without legal challenge.
So if not AI directors, then what?
Any organisation seriously considering embedding AI into its governance structure, whether as a tool, a virtual observer, or something more ambitious, must first ask:
- Does our legal environment permit this?
- What are the benefits and drawbacks of using AI in this way?
If you cannot answer the first question in the affirmative, you would be operating outside your governing framework before the experiment even begins.
But functionally – well, that is where it gets interesting.
If we are not talking about an AI having voting rights or fiduciary duties, we can start exploring whether, in future disputes, the AI’s digital footprint may carry as much weight as (if not more than) the evidence of the directors themselves.
Here’s where the Buzzle case comes into play once again!
Part of the judgment dealt with discrepancies between directors’ recollections and footage recorded by a documentary film crew. The Court was asked to treat the documentary recordings as ‘better evidence’ than the directors’ memories. This is not a question about shadow directors, but about the contemporaneous record overtaking human memory.
Now imagine that instead of a documentary crew, your board has an AI tool logging everything: transcripts, metadata, decision trails, individual director input. What happens when a director says, “I don’t recall that,” and the AI has perfect recall? What happens when the AI becomes the evidentiary source of truth?
That changes the governance dynamic. It could put pressure on directors to defer less to instinct and more to the documentary trail – especially if that trail is generated by a non-human, non-accountable party. We could be heading toward a world where directors are judged not only on their decisions, but on whether they agreed or disagreed with ‘the AI.’
If we give AI the power to shape decisions and the responsibility of recording them, how long before we treat it like it has a seat at the table?