Artificial Intelligence Market- Global Industry Size, Share, Trends, Opportunities, and Forecast, 2018-2028



real life applications of ai :: Article Creator

Top 10 AI-powered Applications For Daily Use

DataBot

DataBot is another AI-powered virtual assistant that offers a wide range of services, including voice command recognition, news updates, weather forecasts, translations, and more.

Databot can answer your questions, provide fun facts, and even converse with you. It employs natural language processing and machine learning to understand user queries and deliver accurate, contextually relevant information and responses.

DataBot users can enjoy a hands-free, personalized assistant that caters to their daily informational and organizational needs. The app helps save time, improves productivity, and provides quick access to essential information.

DataBot is free for download on iOS, Android, and Windows devices.

Conclusion

The rapid advancements in artificial intelligence have led to the development of an impressive array of AI-powered apps that can enhance our daily lives in numerous ways. These apps are revolutionizing how we interact with technology, making our routines more innovative, more efficient, and less repetitive.


AI For Security Is Here. Now We Need Security For AI

Join top executives in San Francisco on July 11-12, to hear how leaders are integrating and optimizing AI investments for success. Learn More

After the release of ChatGPT, artificial intelligence (AI), machine learning (ML) and large language models (LLMs) have become the number one topic of discussion for cybersecurity practitioners, vendors and investors alike. This is no surprise; as Marc Andreessen noted a decade ago, software is eating the world, and AI is starting to eat software. 

Despite all the attention AI received in the industry, the vast majority of the discussions have been focused on how advances in AI are going to impact defensive and offensive security capabilities. What is not being discussed as much is how we secure the AI workloads themselves. 

Over the past several months, we have seen many cybersecurity vendors launch products A brief look at attack vectors of AI systems

Securing AI and ML systems is difficult, as they have two types of vulnerabilities: Those that are common in other kinds of software applications and those unique to AI/ML.

Event

Transform 2023

Join us in San Francisco on July 11-12, where top executives will share how they have integrated and optimized AI investments for success and avoided common pitfalls.

Register Now

First, let's get the obvious out of the way: The code that powers AI and ML is as likely to have vulnerabilities as code that runs any other software. For several decades, we have seen that attackers are perfectly capable of finding and exploiting the gaps in code to achieve their goals. This brings up a broad topic of code security, which encapsulates all the discussions about software security testing, shift left, supply chain security and the like. 

Because AI and ML systems are designed to produce outputs after ingesting and analyzing large amounts of data, several unique challenges in securing them are not seen in other types of systems. MIT Sloan summarized these challenges by organizing relevant vulnerabilities across five categories: data risks, software risks, communications risks, human factor risks and system risks.

Some of the risks worth highlighting include: 

  • Data poisoning and manipulation attacks. Data poisoning happens when attackers tamper with raw data used by the AI/ML model. One of the most critical issues with data manipulation is that AI/ML models cannot be easily changed once erroneous inputs have been identified. 
  • Model disclosure attacks happen when an attacker provides carefully designed inputs and observes the resulting outputs the algorithm produces. 
  • Stealing models after they have been trained. Doing this can enable attackers to obtain sensitive data that was used for training the model, use the model itself for financial gain, or to impact its decisions. For example, if a bad actor knows what factors are considered when something is flagged as malicious behavior, they can find a way to avoid these markers and circumvent a security tool that uses the model. 
  • Model poisoning attacks. Tampering with the underlying algorithms can make it possible for attackers to impact the decisions of the algorithm. 
  • In a world where decisions are made and executed in real time, the impact of attacks on the algorithm can lead to catastrophic consequences. A case in point is the story of Knight Capital which lost $460 million in 45 minutes due to a bug in the company's high-frequency trading algorithm. The firm was put on the verge of bankruptcy and ended up getting acquired by its rival shortly thereafter. Although in this specific case, the issue was not related to any adversarial behaviors, it is a great illustration of the potential impact an error in an algorithm may have. 

    AI security landscape

    As the mass adoption and application of AI are still fairly new, the security of AI is not yet well understood. In March 2023, the European Union Agency for Cybersecurity (ENISA) published a document titled Cybersecurity of AI and Standardisation with the intent to "provide an overview of standards (existing, being drafted, under consideration and planned) related to the cybersecurity of AI, assess their coverage and identify gaps" in standardization. Because the EU likes compliance, the focus of this document is on standards and regulations, not on practical recommendations for security leaders and practitioners. 

    There is a lot about the problem of AI security online, although it looks significantly less compared to the topic of using AI for cyber defense and offense. Many might argue that AI security can be tackled by getting people and tools from several disciplines including data, software and cloud security to work together, but there is a strong case to be made for a distinct specialization. 

    When it comes to the vendor landscape, I would categorize AI/ML security as an emerging field. The summary that follows provides a brief overview of vendors in this space. Note that:

  • The chart only includes vendors in AI/ML model security. It does not include other critical players in fields that contribute to the security of AI such as encryption, data or cloud security. 
  • The chart plots companies across two axes: capital raised and LinkedIn followers. It is understood that LinkedIn followers are not the best metric to compare against, but any other metric isn't ideal either. 
  • Although there are most definitely more founders tackling this problem in stealth mode, it is also apparent that AI/ML model security space is far from saturation. As these innovative technologies gain widespread adoption, we will inevitably see attacks and, with that, a growing number of entrepreneurs looking to tackle this hard-to-solve challenge.

    Closing notes

    In the coming years, we will see AI and ML reshape the way people, organizations and entire industries operate. Every area of our lives — from the law, content creation, marketing, healthcare, engineering and space operations — will undergo significant changes. The real impact and the degree to which we can benefit from advances in AI/ML, however, will depend on how we as a society choose to handle aspects directly affected by this technology, including ethics, law, intellectual property ownership and the like. However, arguably one of the most critical parts is our ability to protect data, algorithms and software on which AI and ML run. 

    In a world

    Although many of us have great imaginations, we cannot yet fully comprehend the whole range of ways in which we can be affected. As of today, it does not appear possible to find any news about AI/ML hacks; it may be because there aren't any, or more likely because they have not yet been detected. That will change soon. 

    Despite the danger, I believe the future can be bright. When the internet infrastructure was built, security was an afterthought because, at the time, we didn't have any experience designing digital systems at a planetary scale or any idea of what the future may look like.

    Today, we are in a very different place. Although there is not enough security talent, there is a solid understanding that security is critical and a decent idea of what the fundamentals of security look like. That, combined with the fact that many of the brightest industry innovators are working to secure AI, gives us a chance to not repeat the mistakes of the past and build this new technology on a solid and secure foundation. 

    Will we use this chance? Only time will tell. For now, I am curious about what new types of security problems AI and ML will bring and what new types of solutions will emerge in the industry as a result. 

    Ross Haleliuk is a cybersecurity product leader, head of product at LimaCharlie and author of Venture in Security.

    DataDecisionMakers

    Welcome to the VentureBeat community!

    DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.

    If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.

    You might even consider contributing an article of your own!

    Read More From DataDecisionMakers


    Getting Real On AI In Application Security

    AI is definitely the hot topic right now, and a lot of people are throwing around or downright parroting information and opinions. Invicti's CTO and Head of Security Research, Frank Catucci, spoke to Mike Shema on episode #234 of the Application Security Weekly cybersecurity podcast to discuss what, realistically, AI means for application security today and in the nearest future. Watch the full video below and read on to get an overview of AI as it currently relates to application security – and to learn about the brand-new art of hallucination squatting.

    Faster, easier to use, and rife with risk

    For all the hype around large language models (LLMs) and generative AI in recent months, the underlying technologies have been around for years, with the tipping point brought about by relatively minor tweaks that have made AI more accessible and useful. While nothing has fundamentally changed on the technical side, the big realization is that AI is here to stay and set to develop even faster, so we really need to understand it and think through all the implications and use cases. In fact, industry leaders recently signed an open letter calling for a 6-month pause in developing models more powerful than GPT-4 until the risks are better understood.

    As AI continues to evolve and get used far more often and in more fields, considerations like responsible usage, privacy, and security become extremely important if we're to understand the risks and plan for them ahead of time rather than scrambling to deal with incidents after the fact. Hardly a day goes by without another controversy related to ChatGPT data privacy, whether it's the bot leaking user information or being fed proprietary data in queries with no clear indication of how that information is processed and who might see it. These concerns are compounded by the growing awareness that the bot is trained on publicly-accessible web data, so despite intense administrative efforts, you can never be sure what could be revealed.

    Attacking the bots: Prompt injection and more

    With conversational AI such as ChatGPT, prompts entered by users are the main inputs to the application – and in cybersecurity, when we see "input," we think "attack surface." Unsurprisingly, prompt injection attacks are the latest hot area in security research. There are at least two main directions to explore: crafting prompts that extract data the bot was not supposed to expose and applying existing injection attacks to AI prompts.

    The first area is about bypassing or modifying guardrails and rules defined by the developers and administrators of a conversational AI. In this context, prompt injection is all about crafting queries that will cause the bot to work in ways it was not intended to. Invicti's own Sven Morgenroth has created a dedicated prompt injection playground for testing and developing such prompt injection attacks in controlled circumstances in an isolated environment.

    The second type of prompt injection involves treating prompts like any other user input to inject attack payloads. If an application doesn't sanitize AI prompts before processing, it could be vulnerable to cross-site scripting (XSS) and other well-known attacks. Considering that ChatGPT is also commonly asked about (and for) application code, input sanitization is particularly difficult. If successful, such attacks could be far more dangerous than prompts to extract sensitive data, as they could compromise the system the bot runs on.

    The many caveats of AI-generated application code

    AI-generated code is a whole separate can of worms, with tools such as GitHub Copilot now capable not only of autocompletion but of writing entire code blocks that save developers time and effort. Among the many caveats is security, with Invicti's own research on insecure Copilot suggestions showing that the generated code often cannot be implemented as-is without exposing critical vulnerabilities. This makes routine security testing with tools like DAST and SAST even more important, as it's extremely likely that such code will make its way into projects sooner or later.

    Again, this is not a completely new risk, since pasting and adapting code snippets from Stack Overflow and similar sites has been a common part of development for years. The difference is the speed, ease of use, and sheer scale of AI suggestions. With a snippet found somewhere online, you would need to understand it and modify it to your specific situation, typically working with only a few lines of code. But with an AI-generated suggestion, you could be getting hundreds of lines of code that (superficially at least) seems to work, making it much harder to get familiar with what you're getting – and often removing the need to do so. The efficiency gains can be huge, so the pressure to use that code is there and will only grow, at the cost of knowing less and less of what goes on under the hood.

    Vulnerabilities are only one risk associated with machine-generated code, and possibly not even the most impactful. With the renewed focus in 2022 on securing and controlling software supply chains, the realization that some of your first-party code might actually come from an AI trained on someone else's code will be a cold shower for many. What about license compliance if your commercial project is found to include AI-generated code that is identical to an open-source library? Will that need attribution? Or open-sourcing your own library? Do you even have copyright if your code was machine-generated? Will we need separate software bills of materials (SBOMs) detailing AI-generated code? Existing tools and processes for software composition analysis (SCA) and checking license compliance might not be ready to deal with all that.

    Hallucination squatting is a thing (or will be)

    Everyone keeps experimenting with ChatGPT, but at Invicti, we're always keeping our eyes open for unusual and exploitable behaviors. In the discussion, Frank Catucci recounts a fascinating story that illustrates this. One of our team was looking for an existing Python library to do some very specific JSON operations and decided to ask ChatGPT rather than a search engine. The bot very helpfully suggested three libraries that seemed perfect for the job – until it turned out that none of them really existed, and all were invented (or hallucinated, as Mike Shema put it) by the AI.

    That got the researchers thinking: If the bot is recommending non-existent libraries to us, then other people are likely to get the same recommendations and go looking. To check this, they took one of the fabricated library names, created an actual open-source project under that name (without putting any code in it), and monitored the repository. Sure enough, within days, the project was getting some visits, hinting at the future risk of AI suggestions leading users to malicious code. By analogy to typosquatting (where malicious sites are set up under domains corresponding to the mistyped domain names of high-traffic sites), this could be called hallucination squatting: deliberately creating open-source projects to imitate non-existent packages suggested by an AI.

    And if you think that's just a curiosity with an amusing name (which it is), imagine Copilot or a similar code generator actually importing such hallucinated libraries in its code suggestions. If the library doesn't exist, the code won't work – but if a malicious actor is squatting on that name, you could be importing malicious code into your business application without even knowing it.

    Using AI/ML in application security products

    Many companies have been jumping on the AI bandwagon in recent months, but at Invicti, we've been using more traditional and predictable machine learning (ML) techniques for years to improve our products and processes internally. As Frank Catucci said, we routinely analyze anonymized data from the millions of scans on our cloud platform to learn how customers use our products and where we can improve performance and accuracy. One way that we use AI/ML to improve user outcomes is to help prioritize vulnerability reports, especially in large environments.

    In enterprise settings, some of our customers routinely scan thousands of endpoints, meaning websites, applications, services, and APIs, all adding up to massive numbers. We use machine learning to suggest to users which of these assets should be prioritized based on the risk profile, considering multiple aspects like identified technologies and components but also the page structure and content. This type of assistant can be a massive time-saver when looking at many thousands of issues that you need to triage and address across all your web environments. When improving this model, we've had cases where we started with somewhere like 6000 issues and managed to pick out the most important 200 or so at a level of confidence in the region of 85%, and that makes the process that much more manageable for the users.

    Accurate AI starts with input from human experts

    When trying to accurately assess real-life risk, you really need to start with training data from human experts because AI is only as good as its training set. Some Invicti security researchers, like Bogdan Calin, are active bounty hunters, so in improving this risk assessment functionality, they correlate the weights of specific vulnerabilities with what they are seeing in bounty programs. This also helps to narrow down the real-life impact of a vulnerability in context. As Frank Catucci stated, a lot of that work is actually about filtering out valid warnings about outdated or known-vulnerable components that are not a high risk in context. For example, if a specific page doesn't accept much user input, having an outdated version of, say, jQuery will not be a priority issue there, so that result can move further down the list.

    But will there come a time when AI can take over some or all of the security testing from penetration testers and security engineers? While we're still far from fully autonomous AI-powered penetration testing (and even bounty submissions), there's no question that the new search and code generation capabilities are being used by testers, researchers, and attackers. Getting answers to things like "code me a bypass for such and such web application firewall" or "find me an exploit for product and version XYZ" can be a huge time-saver compared to trial and error or even a traditional web search, but it's still fundamentally a manual process.

    Known risks and capabilities – amplified

    The current hype cycle might suggest that Skynet is just around the corner, but in reality, what seems an AI explosion merely amplifies existing security risks and puts a different twist on them. The key to getting the best out of the available AI technologies (and avoiding the worst) is to truly understand what they can and cannot do – or be tricked into doing. And ultimately, they are only computer programs written by humans and trained by humans on vast sets of data generated by humans. It's up to us to decide who is in control.






    Comments

    Follow It

    Popular posts from this blog

    Dark Web ChatGPT' - Is your data safe? - PC Guide

    Christopher Wylie: we need to regulate artificial intelligence before it ...

    Top AI Interview Questions and Answers for 2024 | Artificial Intelligence Interview Questions