AI Must Do More Than Predict: New Framework Pushes For Context-aware Transparency
Artificial Intelligence (AI) is transforming decision-making across industries, from finance to healthcare. However, the widespread adoption of AI-driven systems has raised concerns about transparency, interpretability, and trust. AI models, particularly those used in high-stakes environments, often function as "black boxes", making it difficult for users to understand how decisions are made. This lack of explainability can hinder adoption and lead to resistance from stakeholders.
A recent study, "Real-World Efficacy of Explainable Artificial Intelligence using the SAGE Framework and Scenario-Based Design," authored by Eleanor Mill, Wolfgang Garn, and Chris Turner, and published in Applied Artificial Intelligence (2024), explores an innovative approach to explainable AI (XAI). The research introduces the SAGE framework (Settings, Audience, Goals, and Ethics), which aims to contextualize AI explanations based on real-world requirements. By integrating scenario-based design (SBD), the study demonstrates how AI models can be tailored to specific user needs, enhancing trust and usability.
The SAGE framework: Bridging AI explainability gaps
Traditional AI models often prioritize performance over transparency, leading to explanations that are either too technical for non-experts or insufficient for real-world applications. The SAGE framework was developed to bridge this gap by structuring AI explanations around four key dimensions: Settings, Audience, Goals, and Ethics.
The study highlights that most current XAI models lack real-world usability because they do not consider these contextual factors. By applying the SAGE framework, researchers were able to fine-tune AI explanations to align with user needs, ultimately making AI-driven decisions more interpretable and trustworthy.
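To make these dimensions concrete, here is an illustrative sketch, not taken from the paper, of how a SAGE-style explanation context might be represented in code; the field values and the render_explanation helper are hypothetical:

```python
# Illustrative only: a SAGE-style explanation context as a data structure.
# Field names follow the framework's four dimensions; the example values
# and the render_explanation helper are hypothetical, not from the study.
from dataclasses import dataclass

@dataclass
class SageContext:
    settings: str  # where the explanation is consumed (e.g., fraud triage UI)
    audience: str  # who reads it (e.g., fraud analyst, regulator, customer)
    goals: str     # what it must enable (e.g., accept or escalate an alert)
    ethics: str    # constraints (e.g., no disclosure of protected attributes)

def render_explanation(attributions: dict, ctx: SageContext) -> str:
    """Tailor raw feature attributions to the given SAGE context."""
    top = sorted(attributions.items(), key=lambda kv: -abs(kv[1]))[:3]
    drivers = ", ".join(f"{name} ({score:+.2f})" for name, score in top)
    return f"For the {ctx.audience} in the {ctx.settings}: top drivers are {drivers}."

ctx = SageContext("fraud triage UI", "fraud analyst",
                  "decide whether to escalate an alert",
                  "exclude protected attributes")
print(render_explanation({"amount": 0.42, "velocity": 0.31, "country": -0.05}, ctx))
```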
Scenario-based design for real-world AI applications
To test the real-world efficacy of explainable AI, the study focused on fraud detection in financial institutions. Fraud analysts frequently rely on AI models to flag suspicious transactions, but they often struggle to understand why a transaction is considered fraudulent.
Using scenario-based design (SBD), the study created a realistic fraud investigation workflow with a fictional fraud analyst, "Patrick." The scenario outlined Patrick's daily tasks, the decision-making challenges he faced, and the types of explanations he required to effectively perform his job. This approach helped researchers align AI explanations with actual industry workflows, ensuring that the model's outputs were useful, interpretable, and aligned with operational demands.
For instance, the fraud detection model needed to meet a set of scenario-derived requirements, such as explaining why a given transaction was flagged in terms an analyst could act on.
By embedding realistic user requirements into AI model development, the study demonstrated how SBD enhances the practicality of XAI solutions.
Selecting the right XAI model: TreeSHAP for fraud detection
Choosing an appropriate explainability method is crucial in high-stakes applications like fraud detection. The study evaluated several XAI techniques, including SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations).
After a detailed comparison, TreeSHAP was selected as the most suitable method. Unlike model-agnostic SHAP approximations, TreeSHAP is optimized for tree-based models (such as random forests and gradient boosting machines) and computes exact Shapley values efficiently, making it well suited to fraud detection (a minimal sketch appears below).
However, TreeSHAP had two key limitations in this setting, which the researchers addressed by enhancing the method with additional components.
These improvements significantly boosted trust and usability, ensuring that AI-generated fraud alerts were both actionable and transparent.
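The study's specific enhancements are not reproduced here, but the baseline TreeSHAP workflow they build on can be sketched. The following is a minimal, illustrative example, assuming the open-source shap library and a scikit-learn gradient boosting classifier; the features, data and labels are synthetic stand-ins rather than anything from the paper:

```python
# Minimal sketch: explaining a fraud-detection model with TreeSHAP.
# Assumes the `shap` and `scikit-learn` packages; feature semantics
# (amount, hour, velocity, distance) are illustrative placeholders.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))             # e.g., amount, hour, velocity, distance
y = (X[:, 0] + X[:, 2] > 1.5).astype(int)  # synthetic "fraud" label

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer implements TreeSHAP: exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# Per-transaction attributions a fraud analyst like "Patrick" could inspect.
print(shap_values)
```

Each row of the output attributes the model's score for one transaction across the input features, which is the raw material a context-aware explanation layer would then tailor to the analyst.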
Implications for AI adoption and future research
The study provides valuable insights into how explainable AI can be practically integrated into real-world decision-making. By applying the SAGE framework and scenario-based design, AI models can be tailored to better serve human decision-makers, reducing resistance to adoption and improving trust.
One of the key takeaways is that AI explanations should not be one-size-fits-all. Instead, they must be adaptable to different industries, user expertise levels, and ethical considerations. The research also underscores the importance of faithfulness in AI explanations, as users need to know how much confidence they can place in an AI-generated decision.
The authors also identify priorities for future research.
The study by Mill, Garn, and Turner presents a major advancement in explainable AI, demonstrating how the SAGE framework and scenario-based design can bridge the gap between technical AI explanations and real-world usability. By focusing on user-centric design, AI models can become more trustworthy, transparent, and effective in high-stakes environments like fraud detection.
As AI adoption continues to expand, ensuring explainability and accountability will be critical. This research lays a strong foundation for more interpretable, user-aligned AI systems, paving the way for greater public and industry trust in AI-driven decision-making.
Explainable Artificial Intelligence (XAI) For Brain Tumor Diagnosis
AI enhances brain tumor diagnosis using DNA methylation, improving accuracy and identifying new therapeutic targets with explainable AI and Random Forest models.
AI-Based Brain Tumor Diagnosis
Traditional diagnostic methods depend on histopathology, but with recent advances in genomics, machine learning (ML) has become an important tool for analyzing genomic data and accurately classifying tumors. However, the decision-making process of these ML models has not been validated. The study developed an explainable artificial intelligence (XAI) framework designed to enhance the interpretability of DNA methylation-based brain tumor classification.
DNA Methylation and Tumor Classification
DNA methylation is an epigenetic modification that influences gene expression and plays an important role in brain tumor biology. The Heidelberg brain tumor classifier, an ML-based model, uses genome-wide DNA methylation profiles to classify more than 100 molecular brain tumor types. However, because of the black-box nature of ML models, it is not fully understood how these classifications are made. The newly developed XAI framework aims to identify the specific DNA methylation patterns that drive classification decisions.
AI Framework for Brain Tumor Diagnostics
The XAI framework uses the Random Forest (RF) algorithm to analyze methylation data. An RF consists of multiple decision trees, each of which selects the most informative features to distinguish between tumor types. By analyzing probe usage within these decision trees, the framework identifies the key DNA methylation sites responsible for classification, as sketched below. This method enables both global and local interpretability, revealing patterns shared across tumor types as well as biomarkers unique to individual tumor classes.
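The paper's exact probe-usage computation is not reproduced here, but the core idea can be sketched with scikit-learn: count how often each methylation probe is chosen as a split point across the forest's trees. The probe IDs, data and class labels below are synthetic placeholders:

```python
# Hedged sketch of probe-usage analysis in a Random Forest (scikit-learn).
# Beta values, probe IDs, and tumor classes are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 50))            # methylation beta values, 50 CpG probes
y = rng.integers(0, 3, size=300)           # three hypothetical tumor classes
probe_ids = [f"cg{i:08d}" for i in range(50)]

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Count how often each probe is used as a split across all trees:
# frequently used probes are candidate class-defining methylation sites.
usage = np.zeros(X.shape[1], dtype=int)
for tree in rf.estimators_:
    split_features = tree.tree_.feature    # negative values mark leaf nodes
    for f in split_features[split_features >= 0]:
        usage[f] += 1

for i in np.argsort(usage)[::-1][:5]:      # top five most-used probes
    print(probe_ids[i], usage[i])
```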
DNA Methylation: A Key to Tumor Treatment
The study found that tumors are distinguished by methylation patterns in enhancers, CpG islands, and heterochromatic domains. These regions regulate gene expression and tumor development, making them potential targets for further study. Many genes share similar methylation patterns across different tumor classes, which improves classification accuracy and increases the reliability of diagnoses. Understanding methylation patterns can also help identify new therapeutic targets: for example, the RET oncogene is highly expressed in specific tumor classes, making it a potential target for precision medicine.
This interpretable AI can be adapted to future versions of brain tumor classifiers and extended to other cancer types, including sarcomas and meningiomas. It also has potential applications in liquid biopsies, enabling early cancer detection and monitoring.
Reimagining TechOps: Generative AI's Impact On Data And Operations
Sandeep Shilawat is a renowned tech innovator, thought leader and strategic advisor in U.S. Federal markets.
In a previous article, we explored how generative AI (GenAI) is transforming various branches of technology operations (TechOps). As we delve deeper, it becomes evident that strong data discipline is key to successfully leveraging AI in this space.
In this article, we'll examine specific AI tactics and their applications across different areas of TechOps, showcasing their transformative potential. Below are a few key ways TechOps is being transformed by GenAI:
• Data Preparation: Streamlining data cleaning, organization and structuring
• Predictive Maintenance: Anticipating system failures to minimize downtime
• Anomaly Detection: Identifying irregularities in systems and data
• Incident Automation: Streamlining workflows and reducing manual intervention
• Customer Support Bots: Providing automated, efficient customer service
Let's discuss how each of these is impacting TechOps.
Operations Data Preparation: A Foundation For Success
Effective data preparation is arguably the most critical factor for successful generative AI applications in hybrid cloud environments. Generative AI excels at automating data cleaning, organization and structuring, tasks that would otherwise consume significant time and resources.
• Data Cleaning: AI identifies and resolves anomalies, reducing errors and inconsistencies.
• Data Organization: Automation streamlines data generation and entity identification.
• Data Structuring: AI automates schema generation and enforcement for consistency across datasets.
By reducing noise and missing values, AI simplifies data imputation, categorization and clustering, improving data accessibility. Automated workflows drive continuous enhancement, with reporting, documentation and visualization tools accelerating analysis and decision-making. Leading cloud providers offer AI-driven tools, such as Azure Data Factory and Google Cloud Dataprep, that can improve efficiency in hybrid cloud environments.
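The managed services named above expose these capabilities through their own interfaces; as a neutral, illustrative stand-in, the sketch below shows the same cleaning and structuring steps with pandas and scikit-learn. The column names and schema are hypothetical:

```python
# Illustrative data-preparation sketch using pandas/scikit-learn as
# stand-ins for AI-assisted pipelines; columns are hypothetical ops metrics.
import pandas as pd
from sklearn.impute import SimpleImputer

df = pd.DataFrame({
    "cpu_pct": [71.0, None, 88.5, 64.2],
    "region": ["us-east", "us-east", None, "eu-west"],
})

# Data cleaning: impute missing numeric values, fill categorical gaps.
df["cpu_pct"] = SimpleImputer(strategy="median").fit_transform(df[["cpu_pct"]]).ravel()
df["region"] = df["region"].fillna("unknown")

# Data structuring: enforce a simple schema for consistency across datasets.
schema = {"cpu_pct": "float64", "region": "string"}
df = df.astype(schema)
print(df.dtypes)
```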
Predictive Maintenance: Proactive Problem-Solving
Predictive maintenance is another notable application of GenAI in TechOps. By analyzing historical data, AI can forecast potential equipment failures, reducing downtime and operational disruptions.
Key steps in implementing predictive maintenance in a hybrid cloud environment include:
• Data Collection And Curation: Historical data, including IoT sensor readings (e.g., temperature, vibration), is gathered and prepared.
• Data Preprocessing: Outliers are removed and missing values are filled to ensure data quality.
• Model Training: Generative models, such as recurrent neural networks (RNNs), are trained to identify patterns associated with equipment failures (see the sketch after this list).
• Real-Time Monitoring: Once deployed in the cloud, these AI models continuously adapt to incoming data, enabling real-time performance monitoring and proactive alerts for potential failures.
• Decision Support Systems: AI-powered systems propose prioritized maintenance tasks based on predicted failure likelihood, optimizing resource allocation and scheduling.
By following these steps within a hybrid cloud framework, organizations can better ensure operational efficiency and minimal equipment downtime.
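As a hedged illustration of the model-training step, the sketch below trains a small LSTM on synthetic sensor windows with PyTorch; it uses a simple discriminative classifier for clarity rather than a full generative model, and all data, dimensions and thresholds are invented:

```python
# Hedged sketch of RNN-based failure prediction (PyTorch); synthetic data.
import torch
import torch.nn as nn

torch.manual_seed(0)

# 64 machines x 24 hourly readings x 2 sensors (e.g., temperature, vibration);
# label 1 = failure within the next window (synthetic rule, not real telemetry).
X = torch.randn(64, 24, 2)
y = (X[:, -6:, :].mean(dim=(1, 2)) > 0.2).float()

class FailureRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(input_size=2, hidden_size=16, batch_first=True)
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        _, (h, _) = self.rnn(x)          # final hidden state summarizes history
        return self.head(h[-1]).squeeze(-1)

model = FailureRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(50):                      # brief training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Real-time scoring: probability of failure for a new reading window.
print(torch.sigmoid(model(X[:1])).item())
```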
Anomaly Detection In Operations Management
Operational anomalies often indicate potential issues that, if left unaddressed, could escalate into systemic failures. GenAI models excel at identifying these anomalies, allowing organizations to take preemptive action. Solutions such as IBM's Watson Studio and Amazon SageMaker leverage generative models to detect unusual patterns, improving reliability and operational stability.
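Those platforms ship their own anomaly-detection tooling; as a library-agnostic illustration of the underlying idea, the sketch below flags outliers in synthetic operational metrics with scikit-learn's IsolationForest:

```python
# Minimal anomaly-detection sketch (scikit-learn IsolationForest);
# the metrics and injected spikes are synthetic, not from any vendor tool.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=50, scale=5, size=(500, 2))  # latency, error rate
spikes = rng.normal(loc=90, scale=5, size=(10, 2))   # injected anomalies
metrics = np.vstack([normal, spikes])

detector = IsolationForest(contamination=0.02, random_state=0).fit(metrics)
flags = detector.predict(metrics)                    # -1 marks anomalies
print(int((flags == -1).sum()), "anomalous readings flagged")
```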
Automating Problems And Incidents
Incident detection and resolution can be automated through event management systems that leverage real-time data. AI-powered solutions help integrate applications and reduce manual workload, improving efficiency and productivity. By automating complex processes, organizations can streamline incident management, ensuring faster response times and improved system resilience.
Customer Support For Operations
Generative AI has transformed customer support operations through advanced chatbots and virtual assistants. Platforms such as Amazon Lex and Google's Dialogflow enable AI-driven systems to handle routine inquiries efficiently, allowing human agents to focus on more complex issues.
Major enterprises are increasingly automating contact centers with AI chatbots, enhancing customer satisfaction, optimizing resource allocation and reducing operational costs.
Ethical Considerations And Future Outlook
As generative AI becomes increasingly integrated into TechOps, a range of ethical concerns arises, including bias, fairness, privacy and security. Organizations must ensure AI models are trained on diverse and representative datasets to minimize bias and deliver fair outcomes. Additionally, robust data security and privacy measures are essential to safeguard sensitive information.
Generative AI is poised to play an even more significant role in hybrid cloud environments within TechOps. Its ability to leverage foundational models—fine-tuned for specific tasks—combined with explainable AI, offers transparency into how decisions are made. Organizations that understand and work to overcome these concerns can position themselves to unlock greater efficiency with the strategic use of generative AI.
Next Steps
Every enterprise should start by establishing a strong knowledge management framework for TechOps. Once this foundation is in place, businesses can deploy GenAI and large language models (LLMs) to automate standard operating procedures (SOPs) using conversational chatbots. Companies can either work with existing vendors or develop custom AI agents to make operational data machine-readable.
Additionally, organizations should invest in staff training on GenAI and LLMs to ensure effective adoption. After mastering core TechOps practices and establishing a steady operational rhythm, more advanced GenAI tools can be introduced for areas like SecOps, DataOps and FinOps. In the near future, specialized AI agents tailored to each operational domain may become available.
Integrating these AI agents within TechOps can enhance security and transparency in hybrid cloud environments while simplifying operational complexity.
Conclusion
Generative AI is a powerful tool for transforming TechOps in hybrid cloud environments, and many of the challenges associated with multi-cloud and hybrid cloud setups can be effectively addressed through this technology.
By automating complex tasks, streamlining large-scale data preparation and enabling predictive maintenance, organizations can enhance operational efficiency, lower costs, reduce cyber risks and improve data reliability. However, to fully harness the potential of this technology, it's essential to address ethical considerations and stay informed about emerging trends.