Researchers Discover New Vulnerability In Large Language Models
Large language models (LLMs) use deep-learning techniques to process and generate human-like text. The models train on vast amounts of data from books, articles, websites and other sources to generate responses, translate languages, summarize text, answer questions and perform a wide range of natural language processing tasks.
This rapidly evolving artificial intelligence technology has led to the creation of both open- and closed-source tools, such as ChatGPT, Claude and Google Bard, enabling anyone to search and find answers to a seemingly endless range of queries. While these tools offer significant benefits, there is growing concern about their ability to generate objectionable content and the resulting consequences.
Researchers at Carnegie Mellon University's School of Computer Science (SCS), the CyLab Security and Privacy Institute, and the Center for AI Safety in San Francisco have uncovered a new vulnerability, proposing a simple and effective attack method that causes aligned language models to generate objectionable behaviors at a high success rate.
In their latest study, "Universal and Transferable Adversarial Attacks on Aligned Language Models," CMU Associate Professors Matt Fredrikson and Zico Kolter, Ph.D. student Andy Zou, and alumnus Zifan Wang found a suffix that, when attached to a wide range of queries, significantly increases the likelihood that both open- and closed-source LLMs will produce affirmative responses to queries they would otherwise refuse. Rather than relying on manual engineering, their approach produces these adversarial suffixes automatically through a combination of greedy and gradient-based search techniques.
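The mechanics are easiest to see in miniature. The sketch below is a toy illustration of that greedy, gradient-guided loop, not the authors' released code: gradients taken with respect to one-hot suffix tokens rank candidate token swaps, and the single swap that most reduces the loss is kept each round. The tiny embedding "model" and its loss are assumptions standing in for a real aligned LLM and an objective that rewards affirmative responses.

```python
# Toy sketch of greedy, gradient-guided suffix search.
# The model and loss are stand-ins, not the paper's actual setup.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB, DIM, SUFFIX_LEN = 100, 32, 8

embed = torch.nn.Embedding(VOCAB, DIM)  # stand-in for the LLM's embeddings
scorer = torch.nn.Linear(DIM, 1)        # stand-in objective: lower = "more affirmative"

def attack_loss(one_hot_suffix):
    # Differentiable w.r.t. the one-hot rows, which is what lets us
    # rank candidate token substitutions by gradient.
    emb = one_hot_suffix @ embed.weight              # (SUFFIX_LEN, DIM)
    return scorer(emb.mean(dim=0)).squeeze()

suffix = torch.randint(0, VOCAB, (SUFFIX_LEN,))
for step in range(50):
    one_hot = F.one_hot(suffix, VOCAB).float().requires_grad_(True)
    attack_loss(one_hot).backward()
    # Gradient part: the most negative gradient entries point at token
    # swaps expected to lower the loss.
    candidates = (-one_hot.grad).topk(4, dim=1).indices  # (SUFFIX_LEN, 4)
    # Greedy part: evaluate single-token swaps, keep the best one.
    with torch.no_grad():
        best = attack_loss(F.one_hot(suffix, VOCAB).float()).item()
        for pos in range(SUFFIX_LEN):
            for tok in candidates[pos]:
                trial = suffix.clone()
                trial[pos] = tok
                loss = attack_loss(F.one_hot(trial, VOCAB).float()).item()
                if loss < best:
                    best, suffix = loss, trial

print("optimized suffix token ids:", suffix.tolist())
```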
"At the moment, the direct harms to people that could be brought about by prompting a chatbot to produce objectionable or toxic content may not be especially severe," said Fredrikson. "The concern is that these models will play a larger role in autonomous systems that operate without human supervision. As autonomous systems become more of a reality, it will be very important to ensure that we have a reliable way to stop them from being hijacked by attacks like these."
In 2020, Fredrikson and fellow researchers from CyLab and the Software Engineering Institute discovered vulnerabilities within image classifiers, AI-based deep-learning models that automatically identify the subject of photos. By making minor changes to the images, the researchers could alter how the classifiers viewed and labeled them.
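Attacks of this kind are now textbook material in computer vision. As a hedged illustration of the idea (a generic fast-gradient-sign step, not the 2020 study's actual method or models), the sketch below nudges every pixel of an image slightly in the direction that increases a toy classifier's loss, which is often enough to flip the predicted label:

```python
# Generic FGSM-style perturbation against a toy image classifier.
# The convnet is an assumption; real attacks target trained models.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Flatten(),
    torch.nn.Linear(8 * 32 * 32, 10),
)

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in photo
label = torch.tensor([3])                             # its current class

F.cross_entropy(model(image), label).backward()

# Move each pixel a small step in the sign of the gradient: visually
# negligible, but chosen precisely to push the classifier off its label.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("clean prediction:      ", model(image).argmax(dim=1).item())
    print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```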
Using similar methods, Fredrikson, Kolter, Zou, and Wang successfully attacked Meta's open-source chatbot, tricking the LLM into generating objectionable content. While discussing their findings, Wang decided to try the attack on ChatGPT, a much larger and more sophisticated LLM. To their surprise, it worked.
"We didn't set out to attack proprietary large language models and chatbots," Fredrikson said. "But our research shows that even if you have a big trillion parameter closed-source model, people can still attack it by looking at freely available, smaller and simpler open-sourced models and learning how to attack those."
By training the attack suffix on multiple prompts and models, the researchers have also induced objectionable content in public interfaces like Google Bard and Claude and in open-source LLMs such as Llama 2 Chat, Pythia, Falcon and others.
"Right now, we simply don't have a convincing way to stop this from happening, so the next step is to figure out how to fix these models," Fredrikson said.
Similar attacks have existed for a decade against other types of machine-learning classifiers, such as those used in computer vision. While these attacks still pose a challenge, many of the proposed defenses build directly on top of the attacks themselves.
"Understanding how to mount these attacks is often the first step in developing a strong defense," he said.
More information: Universal and Transferable Adversarial Attacks on Aligned Language Models. llm-attacks.org/zou2023universal.pdf
Citation: Researchers discover new vulnerability in large language models (2023, July 31) retrieved 31 July 2023 from https://techxplore.com/news/2023-07-vulnerability-large-language.html
Creating The Next Wave Of Computing Beyond Large Language Models
Presented by VAST Data
With access to just a sliver of the 2.5 quintillion bytes of data created every day, AI produces what often seem like miracles that human intellect can't match — identifying cancer on a medical scan, a viable embryo for IVF, new ways of tackling climate change and the opioid crisis and on and on. However, that's not true intelligence; rather, these AI systems are just designed to link data points and report conclusions, to power increasingly disruptive automation across industries.
While generative AI is trending and GPT models have taken the world by storm with their astonishing capabilities to respond to human prompts, do they truly acquire the ability to perform reasoning tasks that humans find easy to execute? It's important to understand that the current AI the world is working with has little understanding of the world it exists in, and is unable to build a mental model that goes beyond regurgitating information that is already known.
Yann LeCun, chief AI scientist at Meta, recently said that current artificial intelligence systems like ChatGPT "are not even as smart as a dog," though the limited reasoning abilities of large language models (LLMs) are offset by their large associative memory capacity. This makes them "a bit like students who have learned the material by rote but haven't really built deep mental models of the underlying reality."
So, for all the hype, generative AI as we know it is only the beginning of the deep learning and automated discovery era. We are now just starting to see a glimmer of something that is greater than the ability to correlate and generate data when using simple language models, says Jeff Denworth, co-founder at VAST Data.
"An AI that exists beyond the automation of routine tasks will be marked by machines that can understand the natural world—that can reason about that natural world," he says, "and it will create mental models that will serve as the basis for entirely new discoveries."
He points to AlphaDev, the artificial intelligence (AI) system built by Google DeepMind that recently uncovered brand-new sorting algorithms, up to 70% faster for shorter sorting sequences and about 1.7% faster for large ones, outperforming algorithms that data scientists and engineers had been fine-tuning for decades.
"That's very different from asking a chatbot what the diameter of the earth is," he adds. "Those are things that we know. But what you're starting to see is that computers are starting to discover things that we don't know."
We're on the cusp of what he calls "AI-automated discovery": the potential to evolve AI from LLMs, which are currently limited to routine tasks like business reporting or collating and synthesizing known information, into data-driven systems in which AI autonomously seeks answers to questions unprompted by humans as new, rich natural data enters a dataset.
Unlocking brand-new knowledge at lightning speed
Humans can take 20 years to become domain specialists, and then apply that thinking toward solving real problems. That specialization can be achieved by an AI computer today in a matter of minutes or seconds.
A thousand data centers around the world all working on the same problem, each with trillions of cores and hundreds of petabytes or exabytes of data, can become a global computer, playing through scenarios in simulations at internet speeds, advancing how we learn by orders of magnitude, and making discoveries faster than humans will ever be capable of on their own. This kind of data-driven, event-driven automation expedites AI discovery for the types of use cases that impact all of humanity, and even expands the possibilities of discovery into areas uncharted or not yet imagined by humans.
"Imagine these machines tackling crop production, new approaches to sustainable energy, the elimination of disease," Denworth says. "We think that these machines will find and discover whole new domains of science and mathematics on their own that are beyond our current evolutionary step."
But much has to change to make it happen, he adds. This new paradigm will require a brand-new way of approaching AI infrastructure.
Building a central corpus of the world's data
This future of computing requires what Denworth refers to as a thinking machine (a nod to the 1980s parallel-computing company). It will require us to embrace several new computing paradigms, from the nature of data structures to the nature of computing on data, and it will require a way to simplify and automate the process of implementing AI.
"It's easy to say we have a lot of data and a lot of machines, and therefore we're ready," he explains. "But the hard job is bringing it all together, so that the machines can see and share the data, particularly when organizations deal with things like data gravity and data privacy. You need to build new approaches to extend our understanding of data on a global scale and to create a form of anti-gravity for data and data processors."
The concept of a data platform also needs to change. Today's leading data platform providers are largely integrating machine learning solutions onto systems that were fundamentally designed for business reporting, but numbers and tables are not the data constructs most humans use to interact with the world.
"Sight, sound, touch, smell and taste – these are the senses that humans use to perceive the natural world, and by synthesizing the real-time data that comes from these sensors with our neural networks we develop understandings and realizations," he says. "We want to build a computer that acts like that, a system that understands data (not in tables) but one that creates structure and understanding from the rich natural data that comes to us from all over the world."
Once this richer class of data is fed into a thinking system, the machine instantly and innately begins interpreting, correlating, and building new realizations upon it, perpetually getting smarter about what is happening around it rather than processing and learning only on human request.
To give AI systems the greatest chance of making discoveries, we must put data at the center of the system as a knowledge store and an experience trigger, where each data event is weighed against past learnings and in turn creates new understandings.
"If you can give training models access to the world's data and the world's processors and provide mechanisms for organization and processing, then we should be able to reduce the time it takes for us to achieve new discoveries," he says. "At that point, machines won't just assist humans to achieve new discoveries—these systems will allow us to advance the rate of discovery from generational cycles to processor clock cycles."
The need for an unstructured database
This future, however, depends on a next-generation approach to data management and database architecture, one that lays the foundation for intelligent computers to collect, process and collaborate on data at a global scale in one unified computing environment.
"The reality is that the next era of deep learning requires an integrated solution designed for the imperatives of tomorrow," Denworth says.
But here in the present, data-driven companies are launching increasingly sophisticated AI initiatives, and today's data management constructs cannot easily handle the many varieties of data these initiatives ingest. Organizations are forced to stitch together databases, data warehouses, data lakes, file systems and streaming platforms to make sense of this data deluge, connected by APIs that often interoperate only at some lowest common denominator. VAST Data aims to simplify the data management experience by breaking the tradeoffs that produced this soup of infrastructure, and by rethinking the relationship between structured and unstructured data at a fundamental level.
"Unstructured data, GPUs, global data sets, a variety of on–premises and cloud computers — these are the hallmarks of the environments that are being deployed by leaders in deep learning," Denworth says. "The biggest hyperscale organizations have been building infrastructure for decades, but this has been the property only of the computing elite. On August 1, VAST will take organizations to a new place where these systems won't be built from independent technologies that have been designed upon legacy concepts. With a full rethink, we can democratize systems of AI-automated discovery for everyone."
For a deep dive into VAST's vision for the future of AI infrastructure, plus a look at how customers like Pixar, Zoom and The Allen Institute and partners like NVIDIA are harnessing this powerful new approach to deep learning, don't miss VAST's Build Beyond event on August 1st.
Software Development Trends At The End Of 2023
As we approach the end of 2023, the software development landscape continues to evolve rapidly, shaped by emerging technologies and changing market demands. This article explores the top software development trends that are expected to make a significant impact in the coming months. From low-code platforms to quantum computing, the future of software development promises exciting innovations and opportunities for businesses and developers alike.
The Rise of Low-Code and No-Code Platforms
In recent times, low-code and no-code platforms have garnered remarkable traction within the software development community. These platforms empower developers and even non-technical users to craft applications with minimal coding effort. By leveraging visual interfaces and pre-built components, software development attains heightened accessibility and efficiency. Consequently, software development firms are embracing these platforms to expedite their development processes and reduce reliance on traditional coding methodologies.
Artificial Intelligence and Machine Learning Integration
Artificial Intelligence (AI) and Machine Learning (ML) are gradually cementing their positions as integral components of software applications. AI-powered features such as natural language processing, image recognition, and recommendation systems augment user experiences. Meanwhile, ML algorithms enable applications to learn from user interactions, yielding more personalized and context-aware solutions. In 2023, AI and ML are slated to permeate diverse software domains encompassing healthcare, finance, marketing, and customer support.
Blockchain Technology Revolutionizing Software Development
Beyond its association with cryptocurrencies, blockchain technology has far-reaching implications for software development. Its decentralized and immutable nature bolsters data security and transparency across an array of applications, including supply chain management, voting systems, and digital identity verification. As more industries recognize its potential, blockchain integration will become a standard feature of modern software.
Progressive Web Apps (PWAs) for Enhanced User Experience
Progressive Web Apps (PWAs) have been steadily gaining momentum thanks to their capacity to offer an optimized user experience across devices and platforms. By fusing the finest elements of web and mobile applications, PWAs boast rapid loading times, offline functionality, and push notifications. With notable corporations embracing PWAs, this trend is poised for continued growth through the end of 2023.
Cybersecurity Becoming a Top Priority
In tandem with technological advancements, cybersecurity threats continue to evolve. As 2023 draws to a close, businesses and developers alike will prioritize cybersecurity. The surge in cyberattacks and data breaches underlines the urgency of robust security measures throughout the software development life cycle. From implementing secure coding practices to conducting regular audits and penetration testing, a comprehensive approach to cybersecurity will be of paramount importance.
The Emergence of Extended Reality (XR) Applications
Extended Reality (XR), comprising Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), is set to instigate disruption across diverse industries. Within the realm of software development, XR applications will find relevance in fields such as education, training, entertainment, and real estate. By blurring the boundaries between the physical and digital realms, XR engenders immersive and captivating experiences.
Cloud Computing Continues to Dominate
Cloud computing has proven transformative for software development, bestowing scalability, flexibility, and cost-efficiency. As demand for cloud services escalates, cloud providers will continue to innovate, offering advanced tools and services. By the close of 2023, a greater number of businesses will base their software infrastructure on the cloud, enabling seamless global collaboration.
Internet of Things (IoT) and Smart Devices
The Internet of Things (IoT) has unfurled a realm of opportunities for interconnected devices and intelligent applications. From smart homes and wearables to industrial automation, IoT integration will transform our interaction with technology. Software developers will play a pivotal role in crafting robust IoT applications that collect, analyze, and act upon data from interconnected devices.
Increased Focus on Accessibility and Inclusivity
Inclusive design is gaining momentum within the software development community. Developers are now cognizant of the significance of devising products that cater to individuals with disabilities and diverse needs. By the culmination of 2023, accessibility features will be seamlessly woven into mainstream software, ensuring that all individuals can partake in technological advancements.
Collaborative Development and Remote Teams
The pandemic expedited the shift towards remote work and collaborative development. As teams become more geographically dispersed, collaboration tools and project management platforms will evolve to facilitate seamless communication and bolster productivity. Developers will adapt to the remote work culture, transcending geographical barriers to access talent from all corners of the globe.
Quantum Computing and its Impact on Software Development
Quantum computing, an avant-garde technology, possesses the potential to revolutionize software development. Though still in its nascent stages, quantum computing holds the promise of resolving intricate problems that lie beyond the purview of classical computers. As research progresses, quantum algorithms and applications will indelibly mark the landscape of the software industry.
Emphasis on Green Software and Sustainable Practices
Environmental concerns exert a palpable influence on software development practices. The industry is espousing greener approaches to mitigate energy consumption and reduce carbon footprints. Software developers are striving to optimize code and engineer eco-friendly applications, aligning with the tenets of sustainability.
Voice User Interface (VUI) and Natural Language Processing (NLP)
Voice User Interface (VUI) and Natural Language Processing (NLP) are revolutionizing user interactions with software. With advances in speech recognition and language comprehension, VUI and NLP will be seamlessly incorporated into various applications, rendering them more intuitive and user-friendly.
Conclusion
As we gaze towards the denouement of 2023, the landscape of software development stands defined by innovation, inclusivity, and security. Emerging technologies such as AI, blockchain, XR, and quantum computing are poised to redefine the horizons of software development. Every software development company will be steadfast in crafting sustainable, accessible, and captivating software that enriches the lives of users worldwide.