06.10.2024 09:00

Child Welfare Worker in Trouble After Using ChatGPT to Submit Bizarre Report About Sex Doll



An Australian child services worker used ChatGPT to write a sensitive report about a child — and ended up getting the tech banned.

As The Guardian reports, the Department of Families, Fairness and Housing (DFFH) in Victoria, Australia, is at the center of a firestorm after one of its employees submitted a report drafted with the OpenAI chatbot.

The case in question revolved around a child whose parents were facing unrelated charges of sexual offenses. Specifically, the child's father was alleged to have used a doll for sexual purposes, but when the worker submitted their report, mentions of the doll were cast in a bizarrely positive light.

"The report described a child’s doll — which was reported to child protection as having been used by the child’s father for sexual purposes — as a notable strength of the parents’ efforts to support the child’s development needs with 'age-appropriate toys,'" the Office of the Victorian Information Commission (OVIC) explained in a statement.

Beyond the strange description of the doll, the report was also written with "inappropriate sentence structure" and language that didn't fit the agency's guidelines. Upon closer analysis, the child welfare office surmised that the worker had used ChatGPT to draft their report.

Widespread Use

When confronted, the employee admitted that they'd used ChatGPT to "save time and to present work more professionally," but said that they didn't enter client information into the program.

The Victorian DFFH also found, upon further investigation, that the worker, who was not named, may have used ChatGPT in 100 other child protection-related documents. Although the employee insisted they didn't enter sensitive information into the chatbot, their coworkers alleged that they did.

Beyond the case in question, which ultimately ended with the worker no longer being employed by the DFFH, the agency also found that nearly 900 of its employees had accessed the ChatGPT website between July and December 2023.

In response to those findings, OVIC ordered the child welfare agency to ban its staffers from using the software, though notably, the Victorian information commissioner stopped short of banning its use across the state.

"The deputy commissioner believes there may be some specific [AI] use cases where the risk is less than others," OVIC noted in its statement, "but that child protection, by its nature, requires the very highest standards of care."

As far as we can tell, this is the first time someone's been caught and gotten in trouble for using ChatGPT in a welfare setting — and unfortunately, it probably won't be the last.
