
Last month, OpenAI released a usage study revealing that almost 15 percent of work-related conversations on ChatGPT involved “making decisions and solving problems.” Now comes word that at least one high-ranking member of the United States military is using LLMs for the same purpose.
At the Association of the United States Army Conference in Washington, DC, today, Maj. Gen. William “Hank” Taylor reportedly said that “Chat and I are really close lately,” using a distressingly familiar nickname to describe an unspecified AI chatbot. “AI is one thing that, as a commander, it’s been very, very interesting for me.”
Military-focused news site DefenseScoop reports that Taylor told a roundtable of reporters that he and the Eighth Army he commands out of South Korea are “regularly using” AI to improve their predictive analysis for logistical planning and operational purposes. That’s helpful for paperwork tasks like “just being able to write our weekly reports and things,” Taylor said, but it also helps inform their overall direction.
“One of the things that recently I’ve been personally working on with my soldiers is decision-making, individual decision-making,” Taylor said. “And how [we make decisions] in our own personal life, when we make decisions, it’s important. That’s something I’ve been asking and trying to build models to help all of us. Especially, [on] how do I make decisions, personal decisions, right, that affect not only me, but my organization and overall readiness?”
That’s still a far cry from the Terminator vision of autonomous AI weapons systems that take lethal decisions out of human hands. Still, the use of LLMs for military decision-making may give pause to anyone familiar with the models’ well-known tendency to confabulate fake citations and sycophantically flatter users.