Artificial intelligence in the context of digital marketing communication



AI in Defence

This Man Was Killed Four Years Ago. His AI Clone Just Spoke In Court.


People just can't stop using generative AI tools in legal proceedings, despite repeated pushback from frustrated judges. While AI first appeared in courtrooms through bogus "hallucinated" case citations, the trend has taken a turn, driven by increasingly sophisticated AI video and audio tools. In some instances, AI is even being used to seemingly bring victims back from the dead.

This week, a crime victim's family presented a brief video in an Arizona courtroom depicting an AI version of 37-year-old Chris Pelkey. Pelkey was shot and killed in 2021 in a road rage incident. Now, four years later, the AI-generated "clone" appeared to address his alleged killer in court. The video, first reported by local outlet ABC15, appears to be the first known example of a generative AI deepfake used in a victim impact statement.

"To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day in those circumstances," the AI replica of Pelkey says in the video. "In another life, we probably could have been friends."

The video shows the AI version of Pelkey—a burly, bearded Army veteran—wearing a green hoodie and gray baseball cap. Pelkey's family reportedly created the video by training an AI model on various clips of Pelkey. An "old age" filter was then applied to simulate what Pelkey might look like today. In the end, the judge sentenced Horcasitas to 10.5 years in prison for manslaughter, a decision he said was at least partly influenced by the AI-generated impact statement.

"This is the best I can ever give you of what I would have looked like if I got the chance to grow old," the Pelkey deepfake said. "Remember, getting old is a gift that not everybody has, so embrace it and stop worrying about those wrinkles."

A New York man used an AI deepfake to help argue his case 

The AI-generated impact statement comes just a month after a defendant in New York State court, 74-year-old Jerome Dewald, used a deepfake video to assist in delivering his own legal defense. When Dewald appeared in court over a contract dispute with a former employer, he presented a video showing a man in a sweater and blue dress shirt speaking directly to the camera. The judge, confused by the video, asked Dewald if the person on screen was his attorney. In reality, it was an AI-generated deepfake.

"I generated that," Dewald said, according to The New York Times. "That is not a real person."

The judge wasn't pleased and reprimanded Dewald for failing to disclose that he had used AI software to aid his defense. Speaking with the NYT after the hearing, Dewald claimed he hadn't intended to mislead the court but used the AI tool as a way to more clearly articulate his defense. He said he initially planned to have the deepfake resemble himself but switched to the version shown in court after encountering technical difficulties.

"My intent was never to deceive but rather to present my arguments in the most efficient manner possible," Dewald reportedly said in a letter to the judges. 


AI models have 'hallucinated' fake legal cases

The two cases represent the latest examples of generative AI seeping into courtrooms, a trend that began gaining traction several years ago following the surge of public interest in popular chatbots like OpenAI's ChatGPT. Lawyers across the country have reportedly used these large language models to help draft legal filings and collect information. That has led to some embarrassing instances where models have "hallucinated" entirely fabricated case names and facts that eventually make their way into legal proceedings.

In 2023, two New York-based lawyers were sanctioned by a judge after they submitted a brief containing six fake case citations generated by ChatGPT. Michael Cohen, the former personal lawyer of President Donald Trump, reportedly sent fake AI-generated legal cases to his attorney that ended up in a motion submitted to federal judges. Another lawyer in Colorado was suspended after reportedly submitting AI-generated legal cases. OpenAI has even been sued by a Georgia radio host who claimed a ChatGPT response accused him of being involved in a real embezzlement case he had nothing to do with. 

Get ready for more AI in courtrooms 

Though courts have punished attorneys and defendants for using AI in ways that appear deceptive, the rules around whether it's ever acceptable to use these tools remain murky. Just last week, a federal judicial panel voted 8–1 to seek public comment on a draft rule aimed at ensuring that AI-assisted evidence meets the same standards as evidence presented by human expert witnesses. Supreme Court Chief Justice John Roberts also addressed the issue in his 2023 annual report, noting both the potential benefits and drawbacks of allowing more generative AI in the courtroom. On one hand, he observed, AI could make it easier for people with limited financial resources to defend themselves. At the same time, he warned that the technology risks "invading privacy interests and dehumanizing the law."

One thing seems certain: We haven't seen the last of AI deepfakes in courtrooms.


Mack DeGeurin is a tech reporter who's spent years investigating where technology and politics collide. His work has previously appeared in Gizmodo, Insider, New York Magazine, and Vice.


DEVCOM Analysis Center: Advancing Reliability In Defense Systems

ABERDEEN PROVING GROUND, Md. — Ensuring the reliability of military systems is a critical mission for the U.S. Army, and at the forefront of this effort is the U.S. Army Combat Capabilities Development Command (DEVCOM) Analysis Center (DAC). As the Army's Center for Reliability Growth (CRG), DAC plays a vital role in system evaluation, test design and acquisition by providing cutting-edge tools and expertise to enhance the performance and durability of defense technologies.

"Reliability is the backbone of mission success," says Nathan Herbert, a reliability analyst at DAC. "Our work ensures that when a system is deployed, whether it's an autonomous vehicle, a weapons platform, or a sensor system, it performs as expected under the required conditions."

Pioneering AI Reliability in Defense

As the military increasingly integrates Artificial Intelligence (AI) into its operations, ensuring AI reliability is one of DAC's priorities. AI offers transformative potential for decision-making and autonomous functions, but it also introduces new risks that require rigorous assessment.

DAC addresses these challenges through failure mode analysis, risk assessment tools and partnerships with defense organizations. The DoD's Responsible AI Initiative underscores the importance of reliability, safety and mission effectiveness in AI applications.

"We are developing methodologies to understand how AI systems fail and how to mitigate those failures," Herbert explained. "From model training limitations to human-machine breakdowns to adversarial attacks, we need robust design and testing to ensure AI can operate reliably in real-world scenarios."

Challenges in Autonomous Systems

One area of particular concern is the reliability of autonomous ground vehicles and robotic systems, where DAC engineers have identified several recurring problems, including:

  • Obstacle Avoidance Issues – Autonomous systems can struggle to distinguish significant obstacles, such as trees, from minor debris, such as leaves, leading to unnecessary path changes.
  • Identification Failures – AI models sometimes misclassify objects due to incomplete or obscured imagery.
  • Environmental Challenges – Robotic dogs, for instance, have difficulty navigating tall grass, which limits their mobility.
  • Teaming Breakdowns – Ineffective human-machine interaction can degrade both AI system performance and the user's situational awareness.
  • Adversarial Attacks – AI systems are vulnerable to manipulation through tactics like model poisoning, hacking and camouflage techniques that trick detection algorithms.
  • Navigation Exploits – AI-dependent vehicles can be misdirected by subtle environmental modifications, such as deceptive road markings.

According to Herbert, addressing these challenges requires a multi-layered approach that includes cybersecurity, operational context analysis and continuous system monitoring. "Trust in autonomous systems is only possible if we can ensure their resilience to both expected and unexpected conditions," says Herbert.

Innovative Tools for AI Reliability

To enhance AI reliability, DAC has developed tools like the "failure mode wheel," an interactive platform that allows engineers to analyze potential failure points in AI systems. Additionally, DAC has introduced an AI reliability scorecard, adapted from traditional reliability assessments to systematically evaluate factors such as model selection, data quality and configuration management.

"This scorecard helps us take a structured approach to AI reliability," Herbert noted. "It ensures that we are considering all critical aspects of AI development and deployment, from initial training to life cycle management."
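A scorecard like the one described can be pictured as a weighted checklist. The sketch below is purely illustrative: the factor names come from the article, but the 0–5 scoring scale, the weights, and the aggregation rule are assumptions, not DAC's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class Factor:
    """One assessed reliability factor on a hypothetical scorecard."""
    name: str
    score: int    # reviewer-assigned rating, assumed 0-5 scale
    weight: float # assumed relative importance

def scorecard_total(factors: list[Factor]) -> float:
    """Weighted average of factor scores, on the same 0-5 scale."""
    total_weight = sum(f.weight for f in factors)
    return sum(f.score * f.weight for f in factors) / total_weight

# Factor names taken from the article; scores and weights are invented.
factors = [
    Factor("model selection", 4, 0.3),
    Factor("data quality", 3, 0.4),
    Factor("configuration management", 5, 0.3),
]

print(round(scorecard_total(factors), 2))  # → 3.9
```

The value of such a structure is less the arithmetic than the checklist discipline: every factor must be explicitly scored before a system gets an overall reliability rating.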

Shaping the Future of Defense Reliability

Beyond AI, DAC continues to drive reliability advancements in hardware and electronic systems, improving durability and reducing life cycle costs. Its contributions to reliability standards and collaboration with external agencies strengthen the center's role as a key player in defense system evaluation.

As DAC forges ahead, its focus remains clear: to ensure that the Army's most advanced technologies function reliably when they are needed most. "Our goal is to provide Warfighters with the confidence that their systems will perform when needed," Herbert said. "That's what reliability is all about."

---------------------------

The Center for Reliability Growth models and the interactive AI failure mode wheel are available for free in the DAC reliability dashboard at https://apps.Dse.Futures.Army.Mil/RelToolsDashboard/. Users must have a Department of Defense Common Access Card (CAC) and a government network connection to access it. If you do not meet those requirements, DAC can instead send you the legacy versions of the models and tools, upon approval.

The U.S. Army Combat Capabilities Development Command (DEVCOM) Analysis Center (DAC) informs Army transformation and readiness decisions across the system life cycle through objective, integrated system-level analysis and the development of credible data and analytic tools. DAC is one of seven centers within DEVCOM. Visit the DEVCOM website for more information.


What Role Can AI Play In Securing Sensitive Defense And Manufacturing Sites?

Pattern recognition, digital clues and advanced surveillance: a look at the ways AI can help companies keep an eye on their facilities.


Defense News © 2025
