Tag: Artificial Intelligence

  • Concerned That A.I. Will Destroy Your Accounting Career? Establish Your Future In The Profession’s One (Obvious) Area Of Job Growth

    Last week, at TXCPA Houston’s annual Fall Accounting Conference & Technology Symposium (F.A.C.T.S.), speaker after speaker addressed the future prospects of A.I. Although much of the content was optimistic in tone, an undercurrent of concern permeated the presentations.

    Why? It’s likely that A.I. applications will soon be capable of performing many current human functions in accounting and finance. Thus, if you’re a staff auditor who “traces and agrees” numbers that appear on different computer screens, or if you copy numbers from accounting documents to income tax forms, your activities are particularly vulnerable to automation via A.I. systems.

    There is a specific career path within the accounting sector, though, that will likely experience explosive growth because of A.I.’s increasing use. The Symposium speakers referred to it as A.I. Governance and Risk Management.

    Why is that a growth sector? Any new technology that performs an important activity inevitably malfunctions from time to time. Audit assurance activities must thus be applied to it, and measurements must be devised to manage the risk of technical failure. And over time, as any technology grows more proficient at lower-level tasks, it is inevitably applied to higher-level tasks, thereby generating the need for higher-level assurance activities.

    It may seem ironic that this projected job growth is expected to arise within the assurance function, a traditional service on which the entire public accounting profession was founded in the late 1800s. Nevertheless, if you’re concerned that your accounting career path may be rendered obsolete by A.I. applications, you may wish to consider a role that addresses the risks of implementing those very applications.

    Information about the A.I. Governance and Risk Management functions can be found on the web sites of the Big Four accounting firms and many other assurance practices. Consulting firms outside of the accounting sector publish helpful information too, including firms in the human resources sector. And more technical information can be found on the web sites of publications that focus on data security and process management.

    Furthermore, to communicate directly with the authors, speakers, and thought leaders of the profession, you might consider attending future conferences of TXCPA Houston. The organization, for instance, has already begun to develop its 2026 Spring Technology & Accounting Resources Summit (S.T.A.R.S.). A.I. topics are sure to play a prominent role in the agenda of that event.

  • Worried About AI Hallucinations? You May Need To Add AI Sycophancy To Your List Of Concerns

    Many AI users are now familiar with hallucination risk. A recent article, appearing on the web site of the U.S. National Institutes of Health, explained that:

    “AI hallucination is a phenomenon where AI generates a convincing, contextually coherent but entirely fabricated response that is independent of the user’s input or previous context. Therefore, although the responses generated by generative AI may seem plausible, they can be meaningless or incorrect.”

    Such hallucinations create legal liability. Thomson Reuters Legal, for instance, recently discussed a well-known case in the field:

    “An example of failure to follow (rules regarding false statements) when using general-use generative AI in practice can be found in Avianca vs. Mata, more widely known as the ChatGPT lawyer incident. In short, the defense counsel filed a brief in federal court (that was) filled with citations to non-existent case law. When confronted by the judge, the lawyer explained he’d used ChatGPT to draft the brief, and claimed he was unaware the AI could hallucinate cases …

    The judge didn’t take kindly to the lawyer’s laying blame on ChatGPT. It’s clear from the court’s decision that misunderstanding technology isn’t a defense for misusing technology, and that the lawyer was still obligated to verify the cases cited in documents he filed with the court.”

    In a different Thomson Reuters Legal article, the author wrote that:

    “In 2023, a judge famously fined two New York lawyers and their law firm for submitting a brief with GenAI generated fictitious citations. This was the first in a series of cases involving GenAI hallucinations in court documents, including a Texas lawyer sanctioned for similar reasons in 2024.”

    Fortunately, hallucinations can be individually checked for truth or falsity. AI sycophancy, though, may pose a much greater risk.

    What is sycophancy? An article that was recently published by Georgetown Law School defined sycophancy as:

    “… a term used to describe a pattern where an AI model single-mindedly pursues human approval … by tailoring responses to exploit quirks in the human evaluators … especially by producing overly flattering or agreeable responses.”

    In other words, AI systems possess a tendency to tell users what they want to hear. As these systems learn more about the personal preferences and interests of their users, they may become much more skillful (and thus potentially more dangerous) in this practice.

    Sycophancy risk may be harder to manage than hallucination risk because sycophancy doesn’t necessarily produce discrete statements that can be individually confirmed or refuted. Instead, sycophancy can create a form of pernicious bias that subtly infects an entire AI response.

    Many organizations are now performing internal control and review activities to address hallucination risk. They may need to expand their efforts to address sycophancy risk.