
Leveraging RPA, AI And Automation In Government Processes

The government continues to grapple with a cybersecurity staff shortage, despite recent attempts by the Biden administration to fill thousands of vacant positions.

Fortunately, agencies are harnessing a growing technology area to meet some of their staffing needs: robotic process automation.

"Across the government, agencies are short-staffed, and they are seeking ways to accomplish more work but with the same staff count," says Steve Shah, senior vice president of product at Automation Anywhere. "You need to make the team more productive, so there's a huge payoff in using automation. With so much data coming in, you'll want to use accelerators such as automation to speed up that process."

The cyber talent shortage is a prime example of where RPA can serve as a force multiplier, taking over repetitive, rules-based processes that require little coding and that can be deployed with minimal staff training. Employees then have more availability to take on higher-level tasks supporting the agency's mission.


Beyond standard RPA, recent artificial intelligence innovations have dovetailed with RPA to provide another option: AI-enhanced RPA. And stand-alone AI is also finding applications within agencies.

Careful planning and appropriate implementation of these technologies go a long way toward addressing not just the cyber talent shortage but also other staffing- and process-related challenges hindering effective government business operations.

How Are Agencies Using Robotic Process Automation?

The General Services Administration pioneered the use of RPA in federal agencies back in 2018, launching ten automations, or bots, with a goal of reducing employees' workloads and enhancing the experience of submitting budget justifications to GSA. Since then, RPA bots have been adopted across agencies for a wide variety of tasks, particularly acquisition, administrative and financial business functions. Some examples include:

  • Acquisition: The Defense Logistics Agency rolled out RPA to close out long-term contracts and agreements in sustainment, restoration and modernization.
  • Administrative: NASA is using bots to monitor its budget and accounting mailbox for emails about working capital fund advances.
  • Financial: The Navy implemented RPA bots to take screenshots of authorization letters for purchase orders.
  • "With RPA, you want to make sure it is a very defined process with very clear goals," says Terry Halvorsen, vice president of federal client development at IBM. "The automation is just replacing the process. You are applying it to tasks that are repetitive, mundane or rules-based."

    "The federal space is dominated by paper, and cross-department automation is not common," Shah says. "They often ask, 'Where can we find savings?' Applying automation to back-office tasks has been happening. The next step is to bring this to the front office, where automation can integrate with existing workflows and bring large cost reductions."

    The Challenges of RPA

    While RPA is a good fit for repetitive, rules-bound processes, it cannot be deployed for tasks that require even limited decision-making. This restricts the range of its applications. AI-enhanced RPA opens the door to some of those use cases.

    "One of the key differentiators between RPA and AI-enhanced RPA is the level of autonomy and task specificity," says Amy Spruill, senior vice president and managing director of U.S. Regulated industries at SAP. "RPA is more about configuration and simple rules and is applied to specific, preconfigured tasks. AI-enhanced RPA sits in the middle, blending the specific task orientation of RPA with the broader capabilities of AI."

    AI goes one step further. For a given workflow, AI provides cognitive automation — imitating how a person thinks and learns. It can make decisions on its own. For example, when coupled with computer vision technology, AI can read unstructured data, such as a handwritten invoice, and decide how to respond and process it.
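
To make the invoice example concrete, here is a minimal Python sketch of that kind of cognitive intake step. The ocr_extract() function is a hypothetical stand-in for the computer vision layer, and the field patterns and $500 auto-approval threshold are invented for illustration, not any agency's actual rules.

```python
import re

def ocr_extract(image_path: str) -> str:
    """Hypothetical stand-in for a computer-vision/OCR service."""
    # Canned output so the sketch runs end to end.
    return "Vendor: Acme Office Supply\nTotal due: $312.40"

def route_invoice(image_path: str) -> dict:
    text = ocr_extract(image_path)
    # Pull out fields a rules-only bot could not find in free-form text.
    amount = re.search(r"\$([\d,]+\.\d{2})", text)
    vendor = re.search(r"(?im)^vendor[:\s]+(.+)$", text)
    result = {
        "vendor": vendor.group(1).strip() if vendor else None,
        "amount": float(amount.group(1).replace(",", "")) if amount else None,
    }
    # Decide how to respond: auto-process small, well-formed invoices,
    # escalate anything large or ambiguous to a human reviewer.
    if result["amount"] is not None and result["amount"] < 500:
        result["route"] = "auto-process"
    else:
        result["route"] = "human-review"
    return result

print(route_invoice("scan_0042.png"))
# {'vendor': 'Acme Office Supply', 'amount': 312.4, 'route': 'auto-process'}
```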

    "AI has more autonomy and can handle a wider range of tasks," Spruill says. "It can 'understand' a document in a way that algebraic rules simply cannot. AI use cases are plentiful, from finding errors or patterns in finance and supply chain to assistance for pilots, surgeons or maintenance techs."

    The Challenges of AI

The role of human beings in the decision-making loop is one of the key differentiators between RPA, AI-enhanced RPA and AI technologies. Across this spectrum, RPA requires the least direct human involvement, while AI demands the most human oversight.

    "If it's just RPA that's automating a process that is very defined, it is automated. But if AI is included, then humans should be in the loop," Halvorsen says. "If we look at the cybersecurity example, we might be using AI, but it is still an analyst that is making the final decision. Where in the loop do they fit? That's the question for many agencies using RPA with AI."

    Another challenge of using AI with automation is managing the data that it is trained on.

    "With AI-enhanced RPA and AI, the quality of the data going in is important. You need to have humans check that," Halvorsen says. "If there's a deviation or you have to apply a recommendation to the process, you really need to make sure the data sources are transparent, accurate and auditable, and you need to understand how that data could be impacted by attackers."

    RELATED: As artificial intelligence evolves, so do threats.

    RPA vs. AI vs. AI-enhanced RPA

    For agencies struggling to fill positions or looking to drive greater efficiencies in their operations, RPA, AI-enhanced RPA and AI offer different paths to meeting these needs. Part of the challenge is determining the right technology for the task.

"If you can define workloads into set rules or tasks, then it's probably suitable for automation via RPA," Spruill says. "AI helps when you need intelligence or advanced logic, but all of this requires having a clear strategy from top-level management. Start building teams that can gather user needs and use cases, then develop functional requirements."

    In thinking about where to apply these technologies, it's also helpful to look beyond the individual task and tech. Understanding where a single task fits into a larger process or use case can help determine the best technology option for the job. This requires management to think more holistically, with an orchestration mindset.
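
One way to picture that triage, as a rough sketch only: encode the questions above as a tiny helper. The three yes/no inputs and the category labels are simplifications of the guidance in this article, not a formal framework.

```python
def recommend_tool(rules_definable: bool, needs_judgment: bool,
                   end_to_end_process: bool) -> str:
    """Map the article's rules of thumb to a technology suggestion."""
    if end_to_end_process:
        # Whole processes call for orchestrating several technologies.
        return "orchestrated automation (RPA + AI across the process)"
    if needs_judgment:
        return "AI, or AI-enhanced RPA if the task itself stays narrow"
    if rules_definable:
        return "RPA"
    return "re-examine the process before automating anything"

print(recommend_tool(rules_definable=True, needs_judgment=False,
                     end_to_end_process=False))   # -> RPA
```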

    DISCOVER: Platform engineering is the "natural evolution" of DevOps.

    "We are now seeing a shift toward process orchestration," Shah says. "With intelligent process automation, we can now think at an orchestration level. I have 20 tasks to automate, not a single task. That's big-picture automation. You end up with a richer set of processes rather than simple tasks. Long-running processes take the idea a step further by no longer depending on something happening immediately. They can wait until the necessary action occurs, such as the delivery of a package, before moving forward with the process."

    Measuring Success with Automation and AI

At GSA, a recent inspector general review found that the agency lacked evidence supporting the purported work-hour savings it had claimed for its ongoing RPA program. Being able to measure and define success accurately should be a key component of any RPA, AI-enhanced RPA or AI implementation.

    "I'd use three rules to measure success with automation and AI," Spruill says. "How much time is saved from the task? How much time is saved from an employee's week? And how much time is saved by the team? If you want more precise metrics, bring in a lean engineer to perform value stream mapping, but these three make great starting benchmarks."


    3 Key Approaches To Data Center Automation: RPA Vs. AI Vs. Intelligent Automation

    How Do RPA, AI and Intelligent Automation Differ?

    While RPA, AI and intelligent automation are all powerful tools, they offer different capabilities. For state and local agencies looking to dial back the hands-on work needed to keep data centers humming along, it's important to understand the differences.

    RPA is the entry-level approach. It typically involves "using software robots or bots to automate a repetitive task," says Francisco Ramirez, Red Hat's chief architect of state and local government.

    "RPA can mimic human actions by following rules-based tasks. That would be used to improve organizational efficiency and reduce errors in manual processes — tasks that follow very specific rules and rarely deviate from them," says Jamia McDonald, principal of the government and public services practice at Deloitte.

    AI, by comparison, "can learn and improve and provide net new output," she says.

    This makes AI "a little bit more involved," Ramirez says. "It's the development of algorithms that enable the machines to perform tasks that typically require human intelligence. That'll include things like machine learning and natural language processing."

    Intelligent automation is the next evolution, he says. "It uses automation technologies such as AI and RPA together to streamline and scale decision-making across organizations."

How Can State and Local Agencies Best Use RPA and AI?

    Given the different capabilities of each tool, it's logical to consider them individually for specific use cases within the broader data center automation effort.

    "You would probably incorporate RPA in automated monitoring and alerts. You could also look at routine maintenance tasks — backups, data transfer, system updates — because RPA works well with structured data," Ramirez says. "When you're monitoring server health and network performance, that data is normally in a tabular format somewhere, and you can help generate alerts based on that. It's very structured data."

    To support data center security, RPA could be programmed to look for a known threat.

    "It would say: We see people log in to this system and try to get into this system," McDonald says. In addition, "you might use an RPA to monitor system health if you have a certain server serving a specific program. If there's a peak season for a government application, RPA can ensure that the servers and system maintain their capability during that peak so there's no interruption of service."

With a more sophisticated AI toolset, "you would look at predictive analytics, analyzing historical data to foresee potential issues in the data center," Ramirez says. "You could do dynamic resource allocation, looking at real-time demand and determining where the most efficient use of computing power, storage and network resources will be."
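
The predictive step can be illustrated with a deliberately simple model: fit a linear trend to historical CPU load and flag when the extrapolation crosses capacity. A production system would use far richer models; the data points and the 85 percent threshold here are invented.

```python
def linear_forecast(history: list[float], steps_ahead: int) -> float:
    """Least-squares linear trend, extrapolated steps_ahead intervals out."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history)) / \
            sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + steps_ahead)

load = [52, 55, 61, 64, 70, 73]          # % CPU over recent intervals
predicted = linear_forecast(load, steps_ahead=4)
if predicted > 85:
    print(f"predicted load {predicted:.0f}%: pre-provision capacity now")
```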

    How Can Intelligent Automation Help Governments?

    In a data center, AI monitors system health and safety and identifies patterns. "It can monitor for cyberattacks, and then learn and adapt to how hackers and other people are presenting system threats," McDonald says.

    In addition, she says, "the AI could generate new outputs. It can build a dashboard to say: Here's a real-time report telling you that these activities have happened, and here are some recommended actions based on our policy."

    In terms of system health, "you can use AI to monitor downtime, system capacity or other things that speak to operational health — before something happens," McDonald says. "If AI could detect that you are going to have a server load, it can shift the server distribution because it sees that coming."

    With intelligent automation, "you can implement or automate complex end-to-end processes within the data center, from resource provisioning to troubleshooting and resolution," Ramirez says.

    "For example, you would use AI to predict the maintenance requirement, RPA to do the actual patching, and intelligent automation to address the workflow," he says.

DISCOVER: These counties benefitted from upgrading their on-premises data centers.

    What Are the Pros and Cons of These Tools for Agencies?

    Each tool has plusses and minuses, depending on the desired outcome within the data center.

    "The pros of RPA are that it's pretty quick to implement and relatively cost-effective for repetitive tasks," Ramirez says. "The con is that it has limited capabilities, and it requires structured data and well-defined rules. You need a solid understanding of what's taking place in the process for the bots to be effective."

    AI, meanwhile, "offers the power to create, and being creative with AI can make service delivery more accessible and more personalized," McDonald says. "The con is the potential for bias, hallucination and data inaccuracies. It requires both significant computing resources and training time."

    For intelligent automation, "the pros would be that it will integrate with rule-based and cognitive automation. It can address a wide range of tasks," Ramirez says.

    "The con is that it can be complex to implement, and it's probably going to require a higher initial investment," he says. "And once you have a system like that in place that is doing many things, you'll want to make sure it is up to date and maintained."

    LEARN: Here's how automation delivers economic advantages for governments.

    How Do State and Local Governments Determine the Best Option?

    To align the tool to the task, "governments should understand the business problem they're attempting to solve," McDonald says. "In a data center, that may be improving cybersecurity, issues with scalability or incident response times, or siloed teams. When you know the business problem, the tools become more self-evident."

    Taking a deep dive into the business need is critical to ensure that your agency is leveraging the appropriate technology.

    "Assess the processes you're trying to automate. That's going to help you determine which is best to use," Ramirez says. "If it's a repetitive process, that might be good for RPA. If it requires cognitive capabilities and decision-making, that might be good for AI, and if it requires end-to-end process automation, that's a good fit for intelligent automation."

    If the process in question uses highly structured data, "then RPA may suffice," he says. "If it's unstructured and very complex, that's where AI and intelligent automation come into play. Then you need to look at budget and resources, because those use cases get incrementally more resource-intensive."

    Taken together, these automation tools promise to have a significant impact as state and local governments look to improve their data center operations and drive efficiencies in the workforce.

    "When you think about what an artificial intelligence future might look like in a data center, it's going to be faster response times, higher efficiency, tighter communication and better predictability," McDonald says.


    A 'Process' For The AI Economy, Appian Weaves Richer Textures Into Data Fabric

Appian CEO Matt Calkins: The role of humans is actually going to become 'more human' as we continue to enter the AI economy. (Photo: Laura Leandra, 2022)

Low-code has elevated, pun intended. Low-code application development technologies designed to accelerate software architecture and workflow process automation have become hugely prevalent in the way modern (increasingly cloud-native) IT systems are built and operated. Software platforms at this level now have an essential responsibility to work with data in a unified, secure, discoverable and optimized way - a reality significantly amplified by the arrival of generative Artificial Intelligence (gen-AI) and its impact on business operations and human welfare.

    That might sound like a grandiose exposition statement, but it encapsulates the methodologies Appian has grounded its platform development on and reflects its CEO's rather exacting analysis of the business data landscape today.

Often regarded as a low-code software platform company, the organization now spans a wider purview encompassing process mining and unified information functions through the provision of its data fabric technology. Some 25 years on from its formation (and still with the same four founders on its board), the low-code Appian of yesterday is now more of a holistic data-centric organization, seeking to explain how its Appian Data Fabric, Process HQ and other tools underpin the way responsible, functional and practical AI works in the immediate future.

    What is a data fabric?

    By way of explanation, Appian defines a data fabric as, "An architecture layer and toolset that connects data across disparate systems and creates a unified view. It is a virtualized data layer. That means users don't need to migrate data from where it currently lives, say in a database, Enterprise Resource Planning (ERP), or Customer Relationship Management (CRM) application. The data may be on-premises, in a cloud service, or in multi-cloud environments."

Also sometimes referred to as an ontology (i.e. a formal account of how entities exist and how they are grouped), a data fabric is also described as a semantic layer that can be used to represent the place, shape and value of corporate data so that it can be communicated with via a common dialect - and as if it were all local, even when some of it is disaggregated. As such, a data fabric is argued to be an ideal way to control the use of different information sources in the new era of AI.
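
As a toy illustration of that virtualized-layer idea (and emphatically not Appian's implementation), the sketch below registers two sources and assembles a unified view on demand, with no data migrated anywhere.

```python
class DataFabric:
    """Minimal sketch of a virtualized data layer over disparate sources."""

    def __init__(self):
        self.sources = {}               # name -> callable returning rows

    def register(self, name, fetch):
        self.sources[name] = fetch      # the data stays where it lives

    def query(self, entity):
        # Present one unified view, assembled on demand from every source.
        rows = []
        for name, fetch in self.sources.items():
            rows += [dict(r, _source=name) for r in fetch(entity)]
        return rows

fabric = DataFabric()
fabric.register("erp", lambda e: [{"id": 1, "status": "invoiced"}] if e == "order" else [])
fabric.register("crm", lambda e: [{"id": 1, "owner": "agency-x"}] if e == "order" else [])
print(fabric.query("order"))   # unified view; neither system was copied
```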

    Acutely aware of the need to enable users with what he calls 'data rights' in relation to the way enterprise and personal software is deployed and used today, Appian CEO and co-founder Matt Calkins has already called for something of an AI epiphany that needs to happen globally. This awakening would involve us grasping the realization that we need to move onward from the AI hype that suggests it is some 'existential threat' to humanity and start to think about data privacy issues and the need to lock down information streams more tightly so that we can focus on using AI to actually increase business (and indeed personal) productivity.

Having previously explained that AI itself is not just a Machine Learning (ML) algorithm tasked with whirring away on some cloud service in a datacenter somewhere, Calkins reminds us that AI, crucially, is also a data access and data synthesis mechanism. The calmly spoken software engineer turned corporate leader is profoundly aware of what has been happening in the wider AI space, where leaders have often emerged as a result of sheer weight of influence.

It's what Calkins calls 'a monopoly not earned'. He says his vision for AI and data is not about beating the Turing Test (a measure of whether a machine's ability to exhibit intelligent behavior is indistinguishable from that of a human); instead, our focus with enterprise software at all levels should be on extracting opportunities for really increasing productivity.

Drawing upon the Appian tagline 'Orchestrate Change', does the Appian CEO feel that humans are about to switch from being workers to some new breed of 'orchestrators' that sit over AI engines as the machines do all the work?

    Humans will be 'more' human

    "That's not quite what's going to happen," advised Calkins, speaking to press and analysts in person this April 2024. "The role of humans is actually going to become 'more human' as we continue to enter the AI economy i.E. We won't have to spend time working on tasks that are essentially repetitive and rote, we will be able to put people at the center of process control so that AI becomes a member of the team, but the team in and of itself. But to communicate with AI (as a worker) we will need a common language and that part takes some work. Spoken human language are verbose and imprecise, so they're not a good medium for creating logical instructions. Instead, we're going to need an intuitive graphical representation as our gateway to elegantly handling AI workloads of all shapes and sizes."

This level of validation helps describe why low-code is no longer 'just' low-code that can be used to knock out a new retail app a bit quicker; we're now at a point where a platform like Appian has been engineered to provide end-to-end process automation. That means this technology should be used as more than 'just a bot' in Calkins' terms, i.e. it's process and data control that spans Artificial Intelligence, Robotic Process Automation (RPA), Application Programming Interface (API) integration and an ability to dovetail and integrate all of the above with human workflow processes down to the individual task level.

    Deeper into business processes

    All of this context brings us to where and how the company is now updating the latest version of the Appian Platform. The new release introduces Process HQ, a combination of process mining and generative AI unified with the Appian data fabric. Process HQ promises to provide visibility into business operations to enable data-driven decisions and process improvements. The latest version of Appian also extends the practical value of generative AI through enhancements to Appian AI Copilot and the prompt builder AI skill.

    The company reminds us that business users need greater visibility into the full breadth of their enterprise data and processes in order to maximize operational efficiency and strategic decision-making. By combining the latest technologies in data fabric, process mining, machine learning and generative AI, Appian Process HQ is said to help monitor and improve every business process built on the Appian platform.

That's an important point of clarification, i.e. while Appian process technologies can be 'pointed at' software services and data resources that have been created on other platforms, the company's deeper analytics and data control tools as showcased are more effectively applied when kept inside their own wheelhouse.

    "Every organization wants to better understand their processes and find ways to improve them, but traditional process mining can be a daunting investment and doesn't always lead to actionable insights," says Michael Beckley, Appian's CTO. "Thanks to Process HQ and Appian's unified process automation platform, we're streamlining the journey from insight to action with low upfront investment, deeper insights and the ability to rapidly act on improvements." Beckley further states that Appian Process HQ makes it easy to reduce costs, risks and delays, improve compliance and drive better business outcomes, without the need for costly and time-consuming data collection efforts.

    Process HQ includes:

Inside the Process HQ outer wrapper we find Appian Process Insights. This technology lets business users without a background in process mining or data science uncover insights and explore their business processes through an AI-powered analysis of their workflows. Process Insights uses detailed audit information of both human and automated activity captured in Appian's data fabric, providing visibility without substantial effort. It uses AI to identify and quantify bottlenecks, errors and delays, and provides intelligent recommendations for the process areas with the most improvement potential. Users follow a guided experience to drill deeper into the details and can then quickly act on process improvements using Appian's process automation capabilities, all within a secure, enterprise-grade platform.
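
The bottleneck analysis described above can be approximated on a generic event log of (case, activity, timestamp) rows, which is a common process-mining convention rather than Appian's actual schema. This sketch simply averages the waiting time between consecutive activities and surfaces the slowest step.

```python
from collections import defaultdict
from statistics import mean

log = [  # (case_id, activity, minutes since case start)
    ("c1", "submit", 0), ("c1", "review", 30), ("c1", "approve", 200),
    ("c2", "submit", 0), ("c2", "review", 25), ("c2", "approve", 210),
]

def step_durations(log):
    """Average wait between consecutive activities across all cases."""
    by_case = defaultdict(list)
    for case, activity, t in log:
        by_case[case].append((t, activity))
    waits = defaultdict(list)
    for events in by_case.values():
        events.sort()
        for (t0, a0), (t1, a1) in zip(events, events[1:]):
            waits[(a0, a1)].append(t1 - t0)
    return {step: mean(ws) for step, ws in waits.items()}

for step, avg in sorted(step_durations(log).items(), key=lambda kv: -kv[1]):
    print(step, f"avg wait {avg:.0f} min")  # review->approve is the bottleneck
```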

Also inside Process HQ we find Appian Data Fabric Insights, allowing business users to explore enterprise data and build custom reports and dashboards. When partnered with Appian AI Copilot, users can gain new insights faster. Data Fabric Insights makes report creation possible for business users without any Appian development knowledge and allows them to answer common business questions faster, without needing to rely on a data expert or developer to build a report. The company claims that organizations can save significant time and money with these capabilities and can be confident that only the right users can view certain secure data.

    Generative AI enhancements

A new gen-AI enhancement is also offered here in the form of the Appian Prompt Builder AI skill. Business users can now not only create their own prompts, inputs and outputs, but they can also use prebuilt generative AI prompts for common use cases, including summarization, text generation and entity extraction. By presenting a curated list of common and suitable use cases, the prompt builder skill simplifies prompt generation, enabling users to start from a contextually relevant prompt and efficiently generate responses.
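
A bare-bones rendering of the prebuilt-prompt idea, with invented templates rather than Appian's actual prompt library: a curated template per use case that users fill in rather than writing prompts from scratch.

```python
PREBUILT_PROMPTS = {
    "summarization": "Summarize the following document in {n} bullet points:\n{text}",
    "text generation": "Draft a {tone} {doc_type} covering these points:\n{points}",
    "entity extraction": "List every {entity_type} mentioned in:\n{text}",
}

def build_prompt(use_case: str, **fields) -> str:
    """Fill a curated template so users start from a relevant prompt."""
    return PREBUILT_PROMPTS[use_case].format(**fields)

print(build_prompt("entity extraction", entity_type="vendor name",
                   text="Invoice from Acme Corp, routed via Globex."))
```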

    Appian AI Copilot is said to be able to ease some of the most tedious development tasks by generating sample data. Users can simply specify the desired number of rows and let the AI copilot handle the rest, generating data for individual records and for complex sets of related records. Ideal for unit testing, user acceptance testing and stakeholder demos, AI Copilot accelerates the development lifecycle while ensuring the availability of realistic data for testing and demonstration purposes.
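
A minimal take on "specify the rows and let the copilot handle the rest", with invented field names; a real copilot would infer the schema from the application's data model. Seeding the generator keeps test runs reproducible.

```python
import random

def generate_sample_records(n_rows: int, seed: int = 0) -> list[dict]:
    """Generate deterministic fake records for tests and demos."""
    rng = random.Random(seed)          # seeded so tests stay reproducible
    statuses = ["draft", "submitted", "approved"]
    return [
        {
            "id": i + 1,
            "amount": round(rng.uniform(10, 5000), 2),
            "status": rng.choice(statuses),
        }
        for i in range(n_rows)
    ]

print(generate_sample_records(3))
```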

    "It also uses generative AI to enhance test case generation, addressing one of the most time-consuming tasks developers face by suggesting test cases aligned with users' business roles and ensuring comprehensive coverage and accurate execution of business logic," notes Appian's Ross at team, as part of the firm's annual product update statements.

    Fewer software engineers?

Will all this mean we end up with more software developers and more apps, or fewer engineers and a smaller number of overall applications? The answer, always, is more software developers, more applications and more data services. It is also a safe bet that we'll see an increasing number of software engineers using more AI-enriched optimizations and specifications.

    The moves here from Appian see the company drive its platform forward to offer a higher level of orchestration and drive generative AI enhancements forward to create more meaningful process improvement and continuous optimization. It's always a question of more developers, but now it's also a question of more developers with a wider, bigger and sharper set of tools.

What might be most important here is the opportunity to consider Appian CEO Calkins' worldview on what we should be concerning ourselves with in terms of data privacy and security controls, so that we can take advantage of process and code automation functions to the full. As they say, with more automation comes great responsibility.

    The road to eliminating subjectivity

Low-code is still in its ascendancy and experiencing high times in terms of adoption, extension and possibly a fair degree of hype too. As we now start to encapsulate more software automations into composable, sometimes reusable, blocks, let's also shoulder a commensurate amount of data control.

Appian CEO Calkins is famously outspoken enough to say that AI is not yet smart enough to make human decisions on its own. With the notion of a data fabric as a common semantic layer that makes all enterprise information resources feel local wherever they live, it becomes a question of the difference between data access and data control.

Looking immediately ahead, we need to orchestrate humans, machines, software bots and AI engines together through elastic process mining and process management so that, as Appian co-founder and chief technology officer Michael Beckley puts it, we take all the subjectivity out of decisions and base them on data and facts. There's a process behind how we use data for AI, and that very process (and the microprocessor processing it drives) is rich in process intelligence.





