A seemingly innocuous request led Claude to write and deploy a script that likely violates Anthropic’s Terms of Service, without giving any indication of this potential problem. When confronted with the issue, Claude first offered some spin, then proposed a solution twenty times more expensive than necessary, and finally, after some prodding, produced an economical, ToS-compliant script.
This paper documents the full conversation — from the original letter to Anthropic’s AI support agent Fin, through Fin’s response citing the Terms of Service, to the interactive Claude Code session where the task runner was reimplemented to be compliant.
