Planning Minister Sets AI Roadmap For Pakistan With New Plans
ISLAMABAD: Federal Minister for Planning, Development & Special Initiatives, Prof. Ahsan Iqbal, has announced a comprehensive AI roadmap for Pakistan, emphasizing an inclusive strategy to integrate artificial intelligence across the country's key development sectors. Chairing a session of the National Taskforce on Artificial Intelligence, the Minister highlighted that adopting AI is critical for national progress and must align closely with Pakistan's strategic goals.
Acknowledging the rapid pace of global investment in AI, Ahsan Iqbal stressed that Pakistan needs to move forward with clarity and shared purpose. He underscored that the AI roadmap for Pakistan cannot succeed in isolation but requires strong cross-sectoral collaboration, alignment with national priorities, and seamless coordination among all relevant institutions.
To operationalize this vision, the Taskforce was directed to identify twelve priority sectors, including education, health, agriculture, climate, business, and governance, where AI applications can deliver measurable impacts. Each of these sectors will form multi-stakeholder working groups featuring experts from government, academia, and the private sector to draft sector-specific AI plans with clear objectives, timelines, and resource needs.
National AI Fund and Mapping to Drive Innovation
To drive innovation and reduce financial constraints, Ahsan Iqbal announced plans for a National AI Fund to support high-potential ideas and pilot initiatives. The Minister also called for a nationwide mapping effort to catalog AI talent and infrastructure across universities, research centers, and industry to strategically deploy these resources.
As part of fostering dialogue and designing practical solutions, the Minister instructed the Taskforce to organize a national AI workshop in collaboration with PASHA and other industry stakeholders. This platform will bring together government, academia, and private sector leaders to shape AI applications tailored to Pakistan's unique development needs.
The meeting was attended by Minister of State for IT & Telecommunication, Ms. Shaza Fatima Khawaja; Dr. Yasar Ayaz of the National Center for AI; officials from the Ministry of IT, NADRA, PASHA; and key representatives from the private sector.
It is worth noting that under the Prime Minister's direction and Ahsan Iqbal's leadership, Pakistan has already taken major strides by setting up nine Centers of Excellence in AI, Big Data, Cloud Computing, Robotics, and Quantum Computing, along with launching the Quantum Valley Pakistan initiative to build capabilities in emerging technologies. These steps underscore Pakistan's commitment to securing a meaningful place in the global AI landscape through innovation, collaboration, and inclusive governance.
Innovating Defense: Generative AI's Role In Military Evolution
The emergence of generative artificial intelligence (AI) marks a paradigm shift in military research and application, echoing the revolutionary scientific framework presented by Thomas Kuhn in his groundbreaking The Structure of Scientific Revolutions.1 This article delves into the profound implications and transformative potential of generative AI within the military sector, exploring its role as both a disruptive innovation and a catalyst for strategic advancement.2 In the evolving landscape of military technology, generative AI stands as a pivotal development, reshaping traditional methodologies and introducing new dimensions in strategy and tactics. Its ability to process vast amounts of data, generate predictive models, and aid in decision-making processes not only enhances operational efficiency but also presents unique challenges in terms of ethical deployment and integration into established military structures.
This article navigates through the complex terrain of generative AI in military settings, examining its impact on policymaking, strategy formulation, and the broader implications on the principles of warfare. As we stand at the cusp of this technological revolution, this article underscores the need for a balanced approach that harmonizes technological prowess with ethical considerations, strategic foresight, and a deep understanding of the evolving nature of global security dynamics. We aim to provide a comprehensive overview of generative AI's role in shaping the future of military strategy and its potential to redefine the contours of modern warfare.3
Definition of Generative Artificial Intelligence
Generative AI has become a focal point in modern culture with the popularization of applications such as ChatGPT, Dall-E, and Midjourney. Both industry and academia have adopted its use in various innovative ways, adapting it to suit specific cases. Its computational nature streamlines the search for code syntax and helps create computer programs. Within the humanities, it can easily be used to generate written summaries on nuanced topics. Some applications can create images and even music. As an innovation, generative AI has "democratized access to Large Language Models" trained on the open-source internet; it specializes in producing "high quality, human-like material" for wide audiences.4 Before expanding upon the complex consequences of generative AI's growing popularity, the terminology must be defined. Generative AI refers to models that produce more than just forecasts, data, or statistics. Its models are used for "developing fresh, human-like material that can be engaged with and consumed."5
Generative AI is not a specific machine learning model but, rather, a collection of different types of models within data science. The most important differentiation is the output, which mimics the creativity and labor of human capital. Over the last couple of years, we have experienced one of those rare moments classified as a scientific revolution, as society adapts to the changes generative AI has brought to industry.
Military Applications
In August 2023, the U.S. Military announced "the establishment of a Generative Artificial Intelligence task force, an initiative that reflects the Department of Defense's [DoD's] commitment to harnessing the power of artificial intelligence in a responsible and strategic manner."6 Task Force Lima, led by the Chief Digital and Artificial Intelligence Office (CDAO), has been tasked to assess and synchronize the use of AI across the DoD to safeguard national security. Current concerns about the management of training data sets are the primary focus. In time, DoD aims to employ generative AI "to enhance its operations in areas such as warfighting, business affairs, health, readiness, and policy."7 Due to the nature of military operations, the DoD has released risk mitigation guidance to ensure that responsible statistical practices are combined with quality data to produce insightful analytics and metrics.8 For any military application, officials must consider the principles of "governability, reliability, equity, accountability, traceability, privacy, lawfulness, empathy, and autonomy" to establish ethical implementation during this transition period.9
Prospective applications of generative AI include "Intelligent Decision Support Systems (IDSSs) and Aided Target Recognition (AiTC), which assist in decision-making, target recognition, and casualty care in the field;" each of these aims to reduce the mental load of operators and increase the accuracy of decisions in dangerous environments.10 Historically, the U.S. Military has implemented AI in "autonomous drone weapons/intelligent cruise missiles" and witnessed "robust results and reliable outcomes in complex and high-risk environments."11 Although the AI in those weapon systems does not necessarily rely on generative models, it showcases a promising ability to follow the foundational ethical principles of American governance. Figure 1 illustrates DoD's process of adopting AI into new warrior tasks. This system will replace previous practices to cultivate an improved, data-driven military.12
Futuristic applications of generative AI include planning routes, writing operation orders, and formulating memorandums. Furthermore, the defense industry has been working on "3D Generative Adversarial Networks" capable of "analyzing and constructing 3D objects."13 These models "become an increasingly important area to consider for the automation of design processes in the manufacturing and defense industry."14 As the role of creating military goods changes over time, leaders must shift their focus toward thinking more deeply about problems and less about the labor process. They will need to develop critical-thinking skills that allow them to understand generative AI outputs based on data inputs and to avoid the ethical concerns that stem from poor statistical practice. Many companies in the United States have already faced ethical dilemmas resulting from statistical models, ranging from fatal crashes involving self-driving cars to lawsuits over biased hiring tools.15 Current generative AI models may not be trained on military data sets and may have a poor understanding of nuanced military policy. This does not necessarily mean military personnel must refrain from using these platforms, but there is a social burden to take appropriate precautions. The recent breakthroughs of generative AI in the public market will gradually reach a point where the technology can be used for military applications; however, it must first address:
…1) high risks means that military AI-systems need to be transparent to gain decision maker trust and to facilitate risk analysis; this is a challenge since many AI-techniques are black boxes that lack sufficient transparency, 2) military AI-systems need to be robust and reliable; this is a challenge since it has been shown that AI-techniques may be vulnerable to imperceptible manipulations of input data even without any knowledge about the AI-technique that is used, and 3) many AI-techniques are based on machine learning that requires large amounts of training data; this is a challenge since there is often a lack of sufficient data in military applications.16
The next era of military leaders must be aware of their new burden, and in time, officer education systems will shift to reflect these emerging roles.
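The second challenge quoted above, vulnerability to imperceptible manipulations of input data, can be made concrete with a toy sketch of a gradient-sign-style adversarial attack on a linear classifier. Everything here is invented for illustration (the random weights, the "threat"/"benign" labels, the epsilon value); it has no connection to any fielded system and merely shows how many tiny, individually negligible input changes can sum to a flipped decision:

```python
import numpy as np

# Toy linear "target classifier": score = w . x, positive score => "threat".
rng = np.random.default_rng(0)
w = rng.normal(size=100)                                 # fixed model weights
x = -0.3 * w / np.linalg.norm(w) + 0.001 * rng.normal(size=100)  # clearly benign input

def classify(v: np.ndarray) -> str:
    return "threat" if w @ v > 0 else "benign"

# FGSM-style perturbation: nudge every feature by a small epsilon in the
# direction that raises the score. No single change is large, but the sum
# of 100 tiny nudges flips the decision.
epsilon = 0.1
x_adv = x + epsilon * np.sign(w)

print(classify(x))      # benign
print(classify(x_adv))  # threat
```

Defenses such as adversarial training exist, but the asymmetry sketched here, where an attacker needs no knowledge beyond the gradient direction to flip an output, is why the quoted passage treats robustness as a precondition for military use.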
Generative Artificial Intelligence as a Disruptive Innovation
Generative AI can be classified as a disruptive innovation in accordance with the framework presented in Clayton Christensen's The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail. In his book, Christensen explains why great companies in established markets fail over time. The United States is the leading firm within the market of military power. Although this market is not monetarily based, every market experiences two types of technological change: sustaining and disruptive. Sustaining technology supports current market structures and is led by established firms seeking to satisfy current customers' needs. Disruptive technology, however, redefines a market's preferences by finding strengths in historically undeveloped characteristics. It is this process of redefining markets that has "consistently resulted in the failure of leading firms."17 Established firms, by contrast, seek to develop new technology that appeals to their current market based on the existing value system.
History has witnessed the fluidity of battlefield technology (for example, the development of bows, rifles, machine guns, and tanks). Each of these advancements restructured warfare and, in some cases, upset the entire world order. Consider Russia's decline during the Industrial Revolution: a regional power in the late 19th century, it failed to industrialize as quickly as Germany and was unable to organize a strong military industry by World War I. Ultimately, this failure to innovate led to heavy Russian losses on the eastern front against a technologically superior, though much smaller, German army.18 Military value systems reflect what wins on the battlefield. Typically, leaders in established firms/countries overvalue historical approaches and fail to recognize the potential of entrants (competing countries developing disruptive technology) in niche warfighting tasks until the disruptive technology has advanced too far. Once disruptive technology redefines military value systems and operating procedures, it is too late for sustaining countries to catch up, and they are surpassed on the global stage.
Disruptive technology is dangerous to established firms because there is "considerable upward mobility into other networks" while the market "is restraining downward."19 The essential idea is that disruptive technology starts off marketing itself to customers with limited resources yet grows until it can steal bigger contracts. Large firms' managers often have a difficult time justifying "a cogent case for entering small, poorly defined lower end markets that offer only lower profitability."20 Within warfare, this is due to superpowers' need to focus on upmarket value networks, or rather, the connections and transactions between their territories and the current largest threats to national security. Imagine the President of the United States asking Congress in the mid-2010s to invest heavily in developing generative AI, a product that then had no predictable application, rather than focusing on the war in Afghanistan. In hindsight, it would have been a great way to extend the American lead in military power, but until the Russo-Ukrainian War in 2022, perhaps no one could have envisioned the impact of AI in producing kill chains (the process of identifying targets, dispatching forces, and attacking and destroying those targets). That war has been a powerful driver of innovation, notably for autonomous drones that use satellite imagery and image recognition software to identify hostiles.21 These drones communicate with larger servers and drop explosives on their targets, vastly accelerating kill chains compared to historical operating procedures that required gathering intelligence, deploying forces, and then fighting.22 The Chinese Communist Party has invested heavily in AI capabilities and aims to be the world leader by the mid-2030s, a sign of the new military competition the United States faces over this disruptive technology.
While disruptive entrants take technology as a given and operating procedures as variable, sustainers see the opposite: operating procedures are fixed and technology is variable. To maintain success, established militaries abandon niche practices and focus on maintaining the status quo; rational managers in established countries have neither the luxury nor the need for risk. In time, the fluctuations of warfare create a cycle as countries uproot power structures, establish governance systems, and are eventually usurped by innovative conquerors. The key to remaining upmarket — a successful superpower — is for established countries to adopt practices for managing disruptive change. Large militaries will have difficulty field testing emerging technology, so it is good practice to establish external research teams. These smaller organizations should not be expected to deliver great results; their key task is instead to generate the organizational knowledge on which later projects can be built. It is impossible to predict the fluidity of warfare, so militaries must actively stay on guard.
The establishment of Task Force Lima is a key example of the United States managing the disruptive nature of generative AI within the military market.23 Christensen recommends three main strategies for established firms facing disruptive change. One is to pour resources into new markets to make them more profitable, essentially accelerating the growth of emerging markets. Alternatively, a company may wait until the emerging market is defined and intervene as soon as an opportunity presents itself. Lastly, a company may place all responsibility for commercializing disruptive technologies in small, outside organizations.24 DoD has been forced to pursue this last option. A failure to manage AI within the military domain would result in a decline in power similar to the one Russia suffered during the Industrial Revolution. The American military seeks to build new capabilities by using small teams outside existing processes and values to lead innovation, avert security crises, and withstand changes in warfare.
Generative AI in Military Strategy
In the context of military policy and warfighting, the rise of generative AI significantly impacts the strategic and operational frameworks of defense organizations. The integration of this technology into military applications necessitates a nuanced approach to policymaking, blending scientific understanding with ethical and strategic insights from the humanities. C.P. Snow, renowned author of The Two Cultures, aimed to explain the historical divide between humanistic and natural science studies in British society.
He stated that prior to the Industrial Revolution, the societal elite educated their youth through reading and writing to teach them the ways of governance, mostly through the subjects of philosophy, law, and English.25 The Industrial Revolution introduced another domain of study, the applied sciences, which gave the lower and middle classes a new route to improve their own lives through the harnessing of the natural world. Snow's general idea was that most people sought to improve their condition through the Industrial Revolution, which finally allowed the study of science to be applied to everyday life. Over time, the working classes deepened their studies to benefit industrialization, while the elite remained focused on matters of literature and governance. The lasting split in academia between the two cultures was exacerbated in government by its lack of communication with industry.
The application of generative AI in military contexts, such as autonomous weapon systems and decision support tools, requires policies that balance technological capabilities with ethical considerations, including international humanitarian law and the rules of engagement. Governing bodies in America and internationally, such as the United Nations, have found it difficult to regulate advanced cyber operations. Now, with the introduction of advanced statistical models, it is imperative that decision makers understand the implications of using them and their impacts on society, based on the models and training data used. Generative AI introduces new dimensions in warfighting tactics, from automated target recognition to intelligence analysis. Military strategies must evolve to incorporate these AI-driven capabilities while considering their implications for battlefield ethics and soldier safety. Failed recognition, if not properly managed, could result in civilian casualties and infrastructure destruction. The integration of AI in military operations necessitates reforms in military education and training, including interdisciplinary studies that blend technology with ethics, philosophy, and military strategy, thus preparing Soldiers and commanders for AI-augmented warfare. The U.S. Army is pivoting toward merging the two cultures by cultivating data-competent leaders who will not have to rely on analysts to garner insights.26
The primary challenge lies in integrating AI capabilities into existing military structures and operations. This requires not only technological adaptation but also doctrinal and strategic shifts. Perhaps the worst outcome would be a widening of the cultural gap as technologists flee government roles for industry. If integrated well into operations, AI offers opportunities for enhanced operational capabilities, such as improved situational awareness, faster decision-making, and more accurate targeting, contributing to the overall effectiveness of military operations. Generative AI redefines the character of warfare and security, posing new questions about the nature of conflict, the role of human soldiers, and the future of international security dynamics. Failure to legislate and implement AI in a timely manner will invite the abuse of highly lethal AI kill chain systems by hostile actors unconstrained by ethical considerations.
The integration of generative AI into military policy and warfighting presents both challenges and opportunities. It necessitates a new paradigm in military strategy and policymaking, one that harmonizes the advancements in AI with the ethical, strategic, and human aspects of warfare. As military organizations adapt to this AI-driven landscape, the collaboration between technical experts and strategists becomes crucial in shaping effective, ethical, and sustainable military policies and practices.
Conclusion
Generative AI is a disruptive innovation that will completely restructure the military industry. In real time, we are experiencing one of the greatest scientific revolutions in the history of mankind. If you are not convinced of the astonishing advancements of generative AI, go back and reread the introduction: It was written by ChatGPT 4 after it was given this article, a process that took approximately 30 seconds. This type of technology was unimaginable only a few years ago, just like the incredibly lethal kill chains in Ukraine. Within the next five years, extraordinary science will continue to accumulate until both the military and industry have absorbed generative AI's capabilities. Until then, policymakers must continue to exercise caution while implementing AI in warfare and must communicate across the cultural gap with scientists who can explain the inner workings of these complex models. The world may be in the midst of great ambiguity as we hold our breath to see what weapons will emerge from this unprecedented revolution, but one thing is certain: by the end of it, the world will be changed forever.
Notes
1 Thomas S. Kuhn, The Structure of Scientific Revolutions, 4th ed. (Chicago: The University of Chicago Press, 2012).
2 Disruptive innovation is outlined in Clayton M. Christensen's The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail (Boston: Harvard Business Review Press, 2013).
3 ChatGPT 4 was used to manufacture the introduction as well as various subsections of the article to synthesize sentences that were edited and then implemented. The graphic on page 60 was created using Adobe Firefly.
4 Francisco Garcia-Penalvo and Andrea Vazquez-Ingelmo, "What Do We Mean by GenAI? A Systematic Mapping of the Evolution, Trends, and Techniques Involved in Generative AI," International Journal of Interactive Multimedia and Artificial Intelligence (December 2023), https://www.ijimai.org/journal/sites/default/files/2023-07/ip2023_07_006.pdf.
5 Ibid.
6 Department of Defense, "DoD Announces Establishment of Generative AI Task Force," 10 August 2023, https://www.defense.gov/News/Releases/Release/Article/3489803/dod-announces-establishment-of-generative-ai-task-force/.
7 Ibid.
8 Department of Defense, "Department of Defense Data, Analytics, and Artificial Intelligence Adoption Strategy," 27 June 2023, https://media.defense.gov/2023/nov/02/2003333300/-1/-1/1/dod_data_analytics_ai_adoption_strategy.pdf.
9 David Oniani, Jordan Hilsman, Yifan Peng, Ronald K. Poropatich, Jeremy C. Pamplin, Gary L. Legault, and Yanshan Wang, "Adopting and Expanding Ethical Principles for Generative Artificial Intelligence from Military to Healthcare," npj Digital Medicine 6/11 (December 2023): 1-10.
10 Ibid.
11 Ibid.
12 DoD, "Department of Defense Data, Analytics, and Artificial Intelligence Adoption Strategy."
13 Michael Arenander, "Technology Acceptance for AI Implementations: A Case Study in the Defense Industry about 3D Generative Models," (Master of Science thesis, KTH Royal Institute of Technology, 2023).
14 Ibid.
15 Daniel Wu, "A Self-Driving Uber Killed a Woman. The Backup Driver Has Pleaded Guilty," Washington Post, 31 July 2023, https://www.washingtonpost.com/nation/2023/07/31/uber-self-driving-death-guilty/; Jeffrey Dastin, "Insight – Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women," Reuters, 10 October 2018, https://www.reuters.com/article/idUSKCN1MK0AG/.
16 Dr. Peter Svenmarck, Dr. Linus Luotsinen, Dr. Mattias Nilsson, and Dr. Johan Schubert, "Possibilities and Challenges for Artificial Intelligence in Military Applications," Swedish Defence Research Agency, Stockholm, Sweden, 2023, https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-IST-160/MP-IST-160-S1-5.pdf.
17 Christensen, The Innovator's Dilemma, 24.
18 Paul Scharre, Four Battlegrounds: Power in the Age of Artificial Intelligence (New York: W.W. Norton & Company, 2023).
19 Ibid., 24.
20 Ibid., 72.
21 Scharre, Four Battlegrounds.
22 Ibid.
23 DoD, "DoD Announces Establishment of Generative AI Task Force."
24 Christensen, The Innovator's Dilemma, 107.
25 C.P. Snow, The Two Cultures (Cambridge: Cambridge University Press, 2012).
26 Erik Davis, "The Need to Train Data-Literate U.S. Army Commanders," War on the Rocks, 17 October 2023, https://warontherocks.com/2023/10/the-need-to-train-data-literate-u-s-army-commanders/.
2LT Andrew P. Barlow is currently a student in the Infantry Basic Officer Leader Course at Fort Benning, GA. He graduated from the U.S. Military Academy (USMA) at West Point, NY, with a double major in operations research and economics.
Cadet Allison Bender is currently attending USMA (Class of 2026) and majoring in operations research.
This article appears in the Summer 2025 issue of Infantry. Read more articles from the professional bulletin of the U.S. Army Infantry at https://www.benning.army.mil/infantry/magazine/ or https://www.lineofdeparture.army.mil/journals/infantry/.
As with all Infantry articles, the views herein are those of the authors and not necessarily those of the Department of Defense or any element of it.
AI Is Fueling Mergers. Here Are Two That Make Sense.
For HPE, the fit is particularly good. Acquiring Juniper, a networking equipment firm, puts HPE in a much better spot to compete against Cisco Systems and Nvidia in the market for the high-speed networking equipment required in AI data centers.
HPE shares rose 12.6% on Monday, making it the top performing stock in the S&P 500 on the day. The deal officially closed on Wednesday.
Success breeds imitation, and there may be other legacy tech companies looking to the merger market to improve their AI position. I'm not an investment banker, but here are some deals I wouldn't be surprised to see—and that could get a good reception from the market.
Oracle and C3.ai: Oracle is a prime candidate to add AI to its software through an acquisition.
Oracle has already begun transforming itself for the AI age. With perfect timing in 2020, Oracle began running the Microsoft playbook: Transform from a legacy software maker into a cloud company. It has been moving customers to cloud-based versions of its software with annual subscriptions, while at the same time building large data centers to rent out cloud servers for AI and traditional workloads. In fiscal 2025, revenue from the cloud was up 24%, while the rest of Oracle was flat on the year.
Ironically for a software company, Oracle's AI play to date has largely been in hardware: AI cloud servers. It could use an AI software acquisition that, grafted onto existing Oracle offerings, would leverage the mountains of proprietary data customers hold in Oracle databases.
Enter C3.ai. C3's offerings would fit nicely on top of Oracle's software. C3 has 130 ready-made AI applications tailored for different industries, solving problems and helping to predict outcomes. Today, its customers are clustered in energy, manufacturing, government, and the military.
C3's revenue grew by 25% to $389 million in fiscal 2025, but it posted a $289 million loss. The culprit was sky-high expenses: 183% of revenue. In a merger, though, sales and administrative expenses, which together represented 86% of revenue, could be trimmed heavily once C3 is folded into Oracle's large sales force and bureaucracy.
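The ratios above make the merger logic easy to check with back-of-the-envelope arithmetic. The inputs below come straight from the paragraph; the "cut sales and administrative spending in half" scenario is purely a hypothetical assumption for illustration, not a forecast:

```python
# C3.ai figures quoted above (fiscal 2025, approximate).
revenue = 389e6            # revenue, dollars
expense_ratio = 1.83       # total expenses as a share of revenue
sga_ratio = 0.86           # sales + administrative share of revenue

total_expenses = revenue * expense_ratio
standalone_result = revenue - total_expenses   # deeply negative on its own

# Hypothetical scenario: integration into a larger acquirer's sales force
# halves sales and administrative spending.
synergy = 0.5 * sga_ratio * revenue
merged_result = standalone_result + synergy    # still negative, but far closer to breakeven

print(f"standalone result:   {standalone_result / 1e6:,.0f}M")
print(f"after 50% S&A cut:   {merged_result / 1e6:,.0f}M")
```

The computed standalone shortfall (about $323 million) is an operating figure built from rounded ratios, so it will not match the reported $289 million net loss exactly; the point is only that most of the gap sits in sales and administrative costs, the line a larger acquirer can compress.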
After a sharp selloff this year, C3 has a $4.2 billion market capitalization. Oracle could pay with its $11 billion in cash, or with its stock, which trades at a premium to its historical forward price/earnings ratio. Paying with cash could hamper Oracle's plans for data center investment, which came in at $21.2 billion last year, triple the year before; future plans may require more debt, which now stands at $109 billion. In the end, competing capital requirements may be the largest hurdle for this merger.
An Oracle/C3.ai merger would face one obstacle right off the bat: the two companies' founders have a history. C3 CEO Tom Siebel was among Oracle's early employees and became a top salesperson. In the early 1990s, Larry Ellison, Oracle's chairman, rejected Siebel's idea for a new product. Siebel left Oracle and took his idea to found Siebel Systems, which became successful.
As Siebel's software got traction, a long war of words emerged between the two. In the end, Ellison and Siebel made up enough to agree to an acquisition, with Oracle paying $5.85 billion for Siebel Systems in 2005. Another deal would make sense.
Check Point Software and SentinelOne: Check Point Software Technologies is a pioneer in cybersecurity. Its software builds a wall around corporate networks, but increasingly, workers are doing things outside those walls, like working from home or using cloud applications. Check Point also has a cloud-security product that is mature and integrated with its network security.
SentinelOne's strength is AI-driven "endpoint security," proactively protecting employee devices no matter what network they are connected to. Check Point has its own endpoint solution, but SentinelOne's is a more popular product.
Check Point grew revenue by 6% last year, a much slower pace than 20 years ago, but it also generates a lot of free cash flow, which has largely gone to share repurchases. Check Point has reduced its share count by 55% since 2005, becoming a classic "value" play. Now it could turn back to growth.
SentinelOne is a much younger company and is growing smartly. Revenue rose by 32% last fiscal year, but, like C3.ai, it took a huge loss, with total expenses at 140% of revenue. Some 82% of its revenue went to sales and administrative costs, the kind of spending that would be greatly reduced after a merger, thanks to Check Point's larger scale and existing sales force.
SentinelOne has a market capitalization of $6.8 billion, and the stock is down 12% over the past year, versus a gain of 12.5% for the S&P 500.
The bet here is that a combined offering covering network, cloud, and endpoint security would allow Check Point to upsell its existing base of over 100,000 customers. Check Point had just $1.5 billion in cash and short-term investments at the end of March. The deal could be all-stock, or Check Point, which has no debt, could borrow to finance the purchase.
As with any deal, integration is expensive, so shareholders would need to be patient. It could be worth the wait.
Write to Adam Levine at adam.levine@barrons.com
