Just Added
In recent months, generative AI has become all the rage, with OpenAI’s ChatGPT launching for public use in November 2022 and several competitor systems launching as well, such as Google Bard and Microsoft Bing Chat. Generative AI programs such as these ingest massive amounts of information and learn to generate the outputs that are statistically most likely in response to a user’s prompt. With a program like ChatGPT, one can ask for a story in the style of the Brontë sisters, a traditional 12-bar blues song, or the most recent copy of a Medicare manual. Moreover, the prompts can be written in “natural language.” Instead of learning specific search terms or ways to filter results, one can simply write naturally and ask, “What is the definition of PHI under the HIPAA Privacy Rule?” and the software should produce the proper result.

We say “should,” however, because there are instances in which the software “hallucinates” results; in other words, the software draws on elements commonly found in the material the user is asking about, but produces an answer it has created out of whole cloth. For example, consider the recent case in New York in which lawyers used ChatGPT to conduct caselaw research, only to have ChatGPT produce cases that did not actually exist. Our own clients have fallen prey to such AI “hallucinations” when attempting to research Medicare compliance issues. In one instance, a client used two different AI chat programs to look up a Medicare rule, only to receive two different answers, each entirely invented by the program and each wrong. (We know. We checked. Neither the cited chapter and section numbers nor the content presented by the chatbot actually existed.)

Despite these hurdles, generative AI will likely improve over time and become more reliable. Nevertheless, we strongly encourage our clients not to rely solely on such resources and to consult with us when provided with information from such software. In the future, AI may prove to be a powerful tool for use within healthcare, but given the risks posed by relying on a “hallucinated” result, we advise consulting with legal counsel first. We will not be relying on these resources, either.
Continuing Alice’s theme on the need for vigilance in billing company relationships, a Second Circuit Court of Appeals case [Retina Grp. of New England, P.C. v. Dynasty Healthcare, LLC, No. 21-1622-cv (2d Cir. July 7, 2023)] rejected a medical practice’s claims against its billing company after a Medicare Administrative Contractor (MAC) classified the practice as non-participating when it intended to enroll as participating. The MAC never notified the billing company of the classification, but paid some claims at the lower rate applicable to non-participating physicians. The problem was discovered a year after the relationship between the biller and the client ended. The client sued the biller for negligence, breach of contract, and fraud, but the court found it lacked jurisdiction because the claims arose under the Medicare Act and the client had not exhausted its administrative remedies before suing. The real story in this case, however, is that securing payment at the proper rate for a physician practice is one of the most essential responsibilities of a billing company. The biller’s failure to confirm the proper enrollment of its client is unfathomable. The client’s reliance on the billing company, with little oversight of what was being paid and at what rate, is also problematic and underscores Alice’s advice that clients must monitor billing even when the activity is outsourced.
It is widely believed that false claims liability attaches only to claims submitted to federal payment programs. This is wrong. The 11th Circuit recently upheld the criminal conviction of a physician assistant for submitting fake physical therapy claims, along with charges for office visits, for patients who were paid kickbacks to come to the clinic and use their Blue Cross Blue Shield coverage. While the facts were egregious, the point of highlighting this case is to underscore that all compliance programs should pay attention to commercial claims submissions as well. Besides criminal exposure, more likely is the use of mail fraud and False Claims Act charges, including cases instigated by whistleblowers, where the claims at issue were submitted to commercial payers.