The Defense Department’s chief digital and AI officer, Craig Martell, has expressed concern about the potential havoc that large language models and other generative AI agents could cause across society, given their inaccuracies and the lack of safeguards, and is calling on industry to develop detection tools that can distinguish AI-generated content from human-created content.
Craig Martell, the Defense Department’s chief digital and AI officer, has expressed concern about the potential havoc that large language models and other generative artificial intelligence agents like ChatGPT could cause across society. Martell worries about how people might use these tools and about their tendency to produce content that is not factually sound, since they draw on human-created sources that can themselves be inaccurate. He warns that people place too much trust in these tools without appropriate safeguards in place to validate the information they produce. Adversaries seeking to run influence campaigns targeting Americans could use the same tools to spread disinformation. Martell therefore calls on industry to work on detection, so that users and consumers of content can more easily distinguish AI-generated content from human-created content.

Although not everyone in the Defense Department shares Martell’s apprehension about AI, he cautions against excessive enthusiasm about its promise, particularly regarding AI tools for labeling data. In Martell’s view, AI remains a highly human-driven asset, and AI systems must be monitored to ensure they deliver the value they were paid for in the first place.