He is now out of government and has resumed writing his Substack. Here is one excerpt from his latest:
Several states have banned (see also “regulated,” “put guardrails on” for the polite phraseology) the use of AI for mental health services. Nevada, for example, passed a law (AB 406) that bans schools from “[using] artificial intelligence to perform the functions and duties of a school counselor, school psychologist, or school social worker,” though it indicates that such human employees are free to use AI in the performance of their work provided that they comply with school policies for the use of AI. Some school districts, no doubt, will end up making policies that effectively ban any AI use at all by those employees. If the law stopped here, I’d be fine with it; not supportive, not hopeful about the likely outcomes, but fine nonetheless.
But the Nevada law, and a similar law passed in Illinois, go further than that. They also impose regulations on AI developers, stating that it is illegal for them to claim, explicitly or implicitly, of their models that (quoting from the Nevada law):
(a) The artificial intelligence system is capable of providing professional mental or behavioral health care;
(b) A user of the artificial intelligence system may interact with any feature of the artificial intelligence system which simulates human conversation in order to obtain professional mental or behavioral health care; or
(c) The artificial intelligence system, or any component, feature, avatar or embodiment of the artificial intelligence system is a provider of mental or behavioral health care, a therapist, a clinical therapist, a counselor, a psychiatrist, a doctor or any other term commonly used to refer to a provider of professional mental health or behavioral health care.
First there is the fact that the law uses an extremely broad definition of AI that covers a huge swath of modern software. This means that it may become trickier to market older machine learning-based systems that have been used in the provision of mental healthcare, for instance in the detection of psychological stress, dementia, intoxication, epilepsy, intellectual disability, or substance abuse (all conditions explicitly included in Nevada’s statutory definition of mental health).
But there is something deeper here, too. Nevada AB 406, and its similar companion in Illinois, deal with AI in mental healthcare by simply pretending it does not exist. “Sure, AI may be a useful tool for organizing information,” these legislators seem to be saying, “but only a human could ever do mental healthcare.”
And then there are hundreds of thousands, if not millions, of Americans who use chatbots for something that resembles mental healthcare every day. Should those people be using language models in this way? If they cannot afford a therapist, is it better that they talk to a low-cost chatbot, or no one at all? Up to what point of mental distress? What should or could the developers of language models do to ensure that their products do the right thing in mental health-related contexts? What is the right thing to do?
The State of Nevada would prefer not to think about such issues. It would rather deny that they are issues in the first place and insist that school employees and occupationally licensed human professionals are the only parties capable of providing mental healthcare services (I wonder which interest groups drove the passage of this law?).