150 Top AI Companies of 2024: Visionaries Driving the AI Revolution
How Blackbox AI Is Changing Machine Learning
Blackbox AI refers to complex models that provide minimal insight into their decision-making process. Traditional machine learning (ML) models, by contrast, are more interpretable and make it easier to understand how a decision was reached.
These differences between Blackbox AI and traditional ML models bring both challenges and advantages to this dynamic field.
Let's dig deeper to understand the ways it's revolutionizing the landscape of machine learning.
1. Increased Predictive Power
One of the most attractive qualities of Blackbox AI is the predictive accuracy it delivers. The deep learning neural networks behind Blackbox AI can detect sophisticated patterns and correlations in large datasets that simpler models miss. This has led to breakthroughs in healthcare, finance, and autonomous systems, each of which relies heavily on the complex and nuanced relationships present in the data.
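To make that concrete, here is a minimal sketch, assuming scikit-learn and a synthetic nonlinear dataset, in which a small neural network picks up structure that a linear model misses; the dataset and model settings are purely illustrative.
```python
# Minimal sketch: a small neural network capturing a nonlinear pattern
# that a linear model misses. Dataset and model choices are illustrative.
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=2000, noise=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

linear = LogisticRegression().fit(X_train, y_train)
blackbox = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0).fit(X_train, y_train)

print("linear model accuracy:  ", linear.score(X_test, y_test))
print("neural network accuracy:", blackbox.score(X_test, y_test))
```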
2. Automated Feature Engineering
Blackbox AI automates feature engineering, reducing the need for manual input from data scientists. This saves time and lets the model discover new features that would be difficult to derive by hand, which in turn enhances the overall performance of the ML model and helps organizations extract better insights from their data.
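One way to picture automated feature engineering (a minimal sketch, assuming scikit-learn's MLPClassifier with its default ReLU activation) is to reuse a trained network's hidden-layer activations as automatically learned features for downstream models:
```python
# Minimal sketch: treating a trained network's hidden layer as
# automatically learned features. The manual forward pass assumes the
# default ReLU activation used by scikit-learn's MLPClassifier.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)

# Project the raw inputs through the first (hidden) layer: these learned
# representations can be reused as features for simpler downstream models.
hidden_features = np.maximum(0, X @ net.coefs_[0] + net.intercepts_[0])
print(hidden_features.shape)  # (1000, 16)
```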
3. Challenges in Interpretability and Trust
Black box models are often criticized for their lack of transparency. It is hard to know how such a model reaches a specific conclusion, which is concerning in sensitive sectors like healthcare or criminal justice, where decisions based on a Blackbox AI model's outputs can have serious consequences.
This lack of interpretability breeds distrust among stakeholders, including end users and regulatory bodies, and may lead to tighter regulation of such AI models and of ML models in general.
4. Explainable AI (XAI)
In response to the challenges posed by black box AI models, a field of research called explainable AI (XAI) has emerged and is reshaping the machine learning landscape.
Researchers are working to construct AI models that are interpretable and transparent without sacrificing predictive power, and practitioners are developing techniques and tools that better explain the decisions Blackbox models make.
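As a small taste of what such tools do, the sketch below uses permutation importance from scikit-learn, one simple model-agnostic technique; dedicated XAI libraries such as SHAP and LIME offer richer, per-prediction explanations. The dataset and model are illustrative only.
```python
# Minimal sketch of one model-agnostic explanation technique: permutation
# importance, which measures how much a model's score drops when each
# feature is shuffled. This is an illustrative starting point, not a full
# XAI workflow.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Rank features by how much shuffling them hurts accuracy.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```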
5. Industry Adoption and Applications
Blackbox AI is increasingly being applied across industries, in applications such as fraud detection and personalized marketing. The technology enables organizations to make data-driven decisions rapidly. However, its growing adoption raises crucial questions about the ethical implications and responsible use of AI.
6. Future Directions
In the future, balancing performance with interpretability will be an essential area of focus in Blackbox AI. Applying XAI techniques to Blackbox models will be important for fostering trust and acceptance among users. Further advances in computing power and data availability will make these models even more capable and push the boundaries of what is possible in machine learning.
7. Cooperative Intelligence
Blackbox AI marks the beginning of an era in which human expertise complements machine capabilities. Firms are learning that while automated systems can be relied upon for scale and speed, human judgment is still needed to achieve accurate results.
This synergy leads to more informed decisions: humans can interpret AI recommendations while bringing in contextual knowledge that a machine might miss. In that respect, industries can blend human and machine strengths to get the best out of complex scenarios.
Blackbox AI is changing the game in machine learning with greater predictive power and more automated processes. However, it also brings its own interpretability challenges, which call for further research into explainable AI solutions. The future depends on striking an optimal balance between accuracy and transparency to realize the full potential of these models. Companies also need to ensure that their applications of AI remain trustworthy and accountable.
Three Ways AI/ML Will Transform The Identity Governance Space
Benoit Grangé, chief technology & product officer at Omada.
As organizations move increasingly faster through the digital world, it has become more important to give employees frictionless access to the tools they need for their jobs—lest productivity suffer. However, manually maintaining, approving and recertifying identity has long since become an impossibility, leading more organizations to seek out solutions with automated capabilities.
Artificial intelligence (AI) and machine learning (ML) can help with automation. In fact, they have the potential to transform how organizations conduct identity governance and administration (IGA) today.
In Search Of IGA Efficiency
Leaders are looking to balance productivity with security—which means they need the ability to quickly and accurately certify and recertify identities, provide access and so on. When it comes to identity security and governance from a business perspective, leaders are most concerned about how they can control the ecosystem.
There is a massive risk today of ransomware; if access isn't secure, then neither is your infrastructure. How do they address all of the access they have to manage across all of the different environments? How do they make sure end users are getting the right access at the right time?
Another issue is managing employee and contractor identities and access in a quick, compliant and effective way. How are organizations going to onboard identities faster? How can they deliver just-in-time access to people? That's where the market is moving: providing access to an application within 30 minutes, for instance.
Improving Identity Governance: AI And ML's Expanding Role
To meet today's identity demands, organizations can't rely on manual activity alone. AI, ML and automation bring the potential to accelerate and improve this function. Here are three ways AI/ML can help:
Enhanced Data Mapping And Cleaning Using Large Language Models (LLMs)
Traditionally, the integration of IGA with a company's ecosystem required substantial manual effort to ensure data consistency and accuracy. However, LLMs can analyze vast amounts of unstructured and structured data—identifying patterns, anomalies and relationships that might not be immediately apparent to human administrators.
By leveraging these capabilities, LLMs can automate the data mapping and cleaning process, ensuring the IGA system integrates seamlessly with the company's existing infrastructure. This reduces the time and effort required for setup, leading to quicker deployment and more reliable data governance.
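As a purely illustrative sketch of the idea (not any vendor's implementation), an LLM can be prompted to map a messy source record onto a canonical identity schema; the OpenAI client, model name and schema fields below are assumptions for the example.
```python
# Illustrative sketch: using an LLM to map a messy HR record onto a
# canonical identity schema. Model name and schema fields are invented.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

raw_record = {"Emp Name": "J. Smith", "dept.": "Fin", "mail": "jsmith@corp.example"}
target_schema = ["full_name", "department", "email"]

prompt = (
    "Map this HR record onto the target schema and return JSON only.\n"
    f"Record: {json.dumps(raw_record)}\n"
    f"Schema fields: {target_schema}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any capable model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```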
Intelligent Workflow Creation And Automation For Administrators
Using advanced algorithms, AI can analyze the organization's current processes and recommend optimized workflows that enhance efficiency and compliance. Administrators can interact with AI to design, test and implement workflows tailored to their specific needs.
This not only simplifies the workflow creation process but also ensures the workflows are robust, secure and in line with regulatory requirements. Additionally, AI/ML can continuously monitor these workflows, making real-time adjustments and recommendations to improve performance and address emerging security threats.
AI/ML Assistants Integrated With Everyday Tools For End Users
The integration of AI/ML assistants into everyday tools like Microsoft Teams provides real-time support and recommendations, helping users with tasks such as access requests, role assignments and compliance checks directly within their familiar work environment. By integrating AI/ML capabilities into tools that users already use daily, the adoption and execution of IGA functions become more intuitive and less disruptive.
For instance, a user needing access to a particular system can interact with the AI assistant in Teams to request access, receive automated approvals based on predefined policies and get immediate feedback or instructions. This seamless integration not only enhances user experience but also ensures IGA processes are followed consistently and efficiently.
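A hypothetical sketch of the policy check that might sit behind such an automated approval is shown below; the roles, systems and policy table are invented for illustration.
```python
# Hypothetical sketch of a predefined-policy check: an access request is
# auto-approved only when policy allows the requester's role to hold that
# entitlement. Roles, systems and the policy table are invented.
from dataclasses import dataclass

POLICY = {
    ("finance_analyst", "erp_reporting"): "auto_approve",
    ("contractor", "erp_reporting"): "manager_review",
}

@dataclass
class AccessRequest:
    user: str
    role: str
    system: str

def decide(request: AccessRequest) -> str:
    """Return the action a chat assistant would take for this request."""
    return POLICY.get((request.role, request.system), "deny_and_escalate")

print(decide(AccessRequest("jsmith", "finance_analyst", "erp_reporting")))  # auto_approve
print(decide(AccessRequest("acme_temp", "contractor", "erp_reporting")))    # manager_review
```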
Getting Started With AI/ML In IGA: Best Practices
It's a good practice to consider where some of the quick wins are in terms of deriving value from AI/ML in IGA. LLMs can help the user better understand what they're doing, providing transparency. Even something as basic as being able to rely on an LLM to better explain to the user what they're approving is a big step. Many times, administrators don't understand what they're approving. It's really about making the user experience simpler and better.
Another best practice is to always keep data privacy front and center. There are always concerns about data leakage and data privacy, which have to be addressed in the IGA realm. Too often, AI can be something of a black box. When working with a vendor, transparency is key. Make sure they can explain what data the AI models are ingesting and how that data will be protected.
Setting Up For IGA Success
AI/ML is set to transform the IGA space by enhancing data integration processes, automating workflow creation for administrators and integrating intelligent assistants into everyday tools for end users. These advancements will accelerate the setup and integration of IGA systems, improve the accuracy and reliability of data, and make it easier for both administrators and end users to manage and comply with identity governance policies. Identity leaders need to understand these changes to optimize access and efficiency in a cloud-centric environment.
What Does A Data Engineer Do?
MONTE CARLO, MONACO - MAY 11: DS TECHEETAH engineers prepare for the race on the grid during the Monaco E-prix at Monte Carlo on May 11, 2019 in Monte Carlo, Monaco. (Photo by Sam Bloxham/LAT Images)
Data engineers engineer. Obviously they do, the clue is in the name. But what does a data engineer really do as part of their daily routine and how does it impact us as users of enterprise software applications and data services today? Given that not all software developers (aka programmers) would necessarily classify themselves as data engineers, how do the two disciplines differ and yet still dovetail enough to form a positive symbiotic bond?
As opinions on this subject now start to proliferate, we need to consider just how many software engineers would consider themselves to be data engineers and, crucially, we need to know just how far their skills and competencies extend in this regard. What kinds of technologies would data engineers get their hands dirty with if it's not the "command line" code that developers love to squelch their fingers into? Will data engineering become increasingly AI-driven and will this role change as a result?
That's a lot of data engineering questions, so let's start with the easy one.
What Is A Data Engineer?
"It's a moving definition really, because the role of the data engineer itself is changing," advised Alois Reitbauer, chief technology strategist and head of open source at observability company, Dynatrace. "Once we start moving data science and data engineering into the world of application delivery, we immediately need both business experts and 'technology practice' specialists (i.e. security professionals, performance gurus, debugging experts) all working together in a unified inter-disciplinary team."
Reitbauer is realistic and says that, really, we're not there yet with data engineering and the skills are not there in many developer communities, which may well be part of why the idea of these new hybrid work teams is gaining validation. From inside Dynatrace (a company traditionally known for its cloud application performance management services), he says customers are operationalizing and consolidating the way they run their data workloads. This means that data engineering teams need to adopt modern operations practices like a Site Reliability Engineering mindset, and operations people need to learn to manage a new type of data-intensive workload. Kubernetes appears to be becoming the consolidation platform for this effort.
"Imagine if we needed to build large image models to create AI services that would show people how to change a filter on a vacuum cleaner (for want of a pretty random example) today. All that image and video data would need to be digitized and assembled into order via some fairly complex data engineering techniques. Those skills are not being taught as part of a software curriculum in universities and they rarely exist in the real world. We need to start planning now for the data engineering application use cases of tomorrow before they even happen," said Reitbauer.
This is an exceptionally fluid topic in information technology circles; for example, some data engineers only feel comfortable using this job descriptor if they get to call themselves data streaming engineers, because they work with real-time streaming technologies like Apache Kafka. We need to tread carefully here.
Production Data Pipelines
"Data engineers are critical to the success of data science and AI teams for two main reasons. Firstly, they build the data pipelines that provide the training and experimentation data that data scientists use to conduct analyses, build ML models and design data, ML and AI products. Secondly, they build the production data pipelines that feed the production models and ML/AI pipelines that data scientists and ML engineers build," detailed Dr. Kjell Carlsson, head of AI strategy at Domino Data Lab.
Carlsson reminds us that, in practice, all data scientists need good data engineering skills because they will constantly need to access additional data, build on the data pipelines they get access to and create new features from this data. Sometimes they find themselves needing to build the production data pipelines as well, although he agrees that this isn't a recommended practice since specialized data engineers will usually build more efficient and more robust pipelines.
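For readers unfamiliar with the term, the following is a minimal, illustrative sketch (in pandas, with invented paths and column names) of the kind of training-data pipeline being described: extract raw records, clean and transform them, and land a feature table for data scientists to use.
```python
# Minimal, illustrative sketch of a training-data pipeline: extract raw
# records, clean and transform them, and land a feature table. Paths and
# column names are invented for the example.
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    """Pull raw transaction records from a CSV export."""
    return pd.read_csv(path, parse_dates=["timestamp"])

def transform(raw: pd.DataFrame) -> pd.DataFrame:
    """Clean the data and derive simple model-ready features."""
    cleaned = raw.dropna(subset=["customer_id", "amount"])
    return (
        cleaned.groupby("customer_id")
        .agg(total_spend=("amount", "sum"),
             txn_count=("amount", "count"),
             last_seen=("timestamp", "max"))
        .reset_index()
    )

def load(features: pd.DataFrame, path: str) -> None:
    """Write the feature table where training jobs can pick it up."""
    features.to_parquet(path, index=False)

if __name__ == "__main__":
    load(transform(extract("raw_transactions.csv")), "customer_features.parquet")
```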
"The best organizations embed data engineers in their data science and AI teams to streamline ongoing collaboration throughout the model development process and get faster time to value, better performance and more robust ML and AI applications by doing so," said Domino's Dr Carlsson. "With the advent of generative AI, many observers and business leaders incorrectly presumed that data science and AI expertise was no longer necessary and that data engineers, software engineers would be all that is necessary to build AI applications. Unfortunately, this has been a large contributor to the plethora of proof of concept projects that have failed to be put into production and the growing disillusionment with generative AI as a transformative technology."
Successful data engineering for enterprise generative AI use cases in customer service, tech and biopharma, together with expert ML and AI skills, can give us some clues as to the data engineering skills needed today. We need data experts capable of understanding a business problem, decomposing it so that it is solvable using ML and AI methods, designing and iteratively developing, testing and validating a solution aligned to the business need, and creating a robust ML/AI pipeline that orchestrates a range of new AI technology components.
"These are not skills that data engineers (or developers) are trained on, nor do they have the opportunity to acquire these skills as part of their normal roles. However, data engineers often have excellent prerequisites for becoming successful data scientists, ML engineers and the new role of 'AI engineer', it just requires additional training, opportunity and experience," surmised Domino's Carlsson.
Breathing Life Into AI Aspirations
John Roese, global chief technology officer & chief AI officer at Dell Technologies, says that every AI aspiration will need data engineering, as without a modern foundation of data, the AI outcomes we desire will never happen. He reminds us that modern data environments are no longer defined by individual components such as databases, extract-transform-load (ETL) systems, analytic tools and so on. Instead, modern data systems are a connected set of technologies that create pathways between sources of data - sensors, apps, telemetry systems, customer portals - and systems that can distil and extract value from that data, including AI chatbots, AI analytic tools, big data tools and agentic systems.
"These modern data systems, though, are not about creating connections between single sets of data and single tools to use that data; instead, the modern data systems create a fabric or mesh that allows these modern tools to access data from a range of sources and create real combinatorial insights; think of AI models that can organize content across many sources, AI's that can understand and optimize the entire selling process and AI's that can understand and engage with the whole customer experience from acquisition to services. Today and in the future, data engineering is the skill that defines, implements and operates this modern data system," said Roese.
As per one of our central questions posed at the outset here, does Roese feel data engineering will become increasingly AI-driven and will the role change as a result?
"Yes! A modern enterprise today could have petabytes to exabytes to even zettabytes of internal data across all sources (this is many times larger than the data used to train the largest LLMs today). Applying AI to that modern enterprise taps into a data ecosystem so large and complex that it would be impossible for any human being to understand or operate manually. Because of that, the only path for modern data engineering to succeed is the aggressive adoption of AI tools and systems to scale the human efforts needed. However, we do not believe that the early co-pilot tools (tools that augment a human doing a task) will be enough," said Roese.
He underlines the way things are moving and says that, fortunately, we are now entering the era of AI autonomous agents, so the inevitable path is that the human data engineer will become less of a doer of tasks and more of an orchestrator of autonomous agents that can do the work. Roese asks us to imagine a human data engineer who sets the intentions and guidelines of a data strategy but then delegates work in planning, architecting, implementing and even operating to specialised AI agents that execute those tasks collectively.
"As they work, the human is in the loop to make sure they are not getting stuck, that they are aligned with the business objectives and that, when new efforts are needed, that work can be defined and delegated to the right agents. This idea of agents working for and with the data engineer gives us a path to operate at petabyte, exabyte, or even zettabyte scale data systems. Given this, the data engineering role will evolve to be an expert in modern data architecture and a leader of teams of humans plus autonomous agents that will do the work needed to deliver modern data outcomes in the AI era," concluded Roese.
Our Survey Said
As this debate plays out, extends and solidifies, it is not uncommon (at the time of writing) for technology vendors to hold live QR-code powered audience polls during annual conference keynote sessions asking attendees how many of the assembled identify as data engineers today.
The number is increasing all the time, but the majority of those who say yes would probably also say that they are essentially software application development professionals in the first instance. As this role cements itself and becomes more accurately codified, we may perhaps look back on the essentially quite fluid comments made here at the end of the current decade and see how far we have progressed.