New AI Security Guidelines Published By NCSC, CISA & More International Agencies
The U.K.'s National Cyber Security Centre, the U.S.'s Cybersecurity and Infrastructure Security Agency and international agencies from 16 other countries have released new guidelines on the security of artificial intelligence systems.
The Guidelines for Secure AI System Development are designed to guide developers in particular through the design, development, deployment and operation of AI systems and ensure that security remains a core component throughout their life cycle. However, other stakeholders in AI projects should find this information helpful, too.
The guidelines were published shortly after world leaders committed to the safe and responsible development of artificial intelligence at the AI Safety Summit in early November.
At a glance: The Guidelines for Secure AI System Development

The Guidelines for Secure AI System Development set out recommendations to ensure that AI models – whether built from scratch or based on existing models or APIs from other companies – "function as intended, are available when needed and work without revealing sensitive data to unauthorized parties."
Key to this is the "secure by default" approach advocated by the NCSC, CISA, the National Institute of Standards and Technology and various other international cybersecurity agencies in existing secure-by-design frameworks.
A combined 21 agencies and ministries from a total of 18 countries have confirmed they will endorse and co-seal the new guidelines, according to the NCSC. This includes the National Security Agency and the Federal Bureau of Investigation in the U.S., as well as the Canadian Centre for Cyber Security, the French Cybersecurity Agency, Germany's Federal Office for Information Security, the Cyber Security Agency of Singapore and Japan's National Center of Incident Readiness and Strategy for Cybersecurity.
Lindy Cameron, chief executive officer of the NCSC, said in a press release: "We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up. These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout."
Securing the four key stages of the AI development life cycle

The Guidelines for Secure AI System Development are structured into four sections, each corresponding to a different stage of the AI system development life cycle: secure design, secure development, secure deployment and secure operation and maintenance.
The guidelines apply to all types of AI systems, not just the "frontier" models that were heavily discussed during the AI Safety Summit hosted in the U.K. on Nov. 1-2, 2023. They are also relevant to all professionals working in and around artificial intelligence, including developers, data scientists, managers, decision-makers and other AI "risk owners."
"We've aimed the guidelines primarily at providers of AI systems who are using models hosted by an organization (or are using external APIs), but we urge all stakeholders…to read these guidelines to help them make informed decisions about the design, development, deployment and operation of their AI systems," the NCSC said.
The Guidelines for Secure AI System Development align with the G7 Hiroshima AI Process published at the end of October 2023, as well as the U.S.'s Voluntary AI Commitments and the Executive Order on Safe, Secure and Trustworthy Artificial Intelligence.
Together, these guidelines signify a growing recognition amongst world leaders of the importance of identifying and mitigating the risks posed by artificial intelligence, particularly following the explosive growth of generative AI.
Building on the outcomes of the AI Safety Summit

During the AI Safety Summit, held at the historic site of Bletchley Park in Buckinghamshire, England, representatives from 28 countries signed the Bletchley Declaration on AI safety, which underlines the importance of designing and deploying AI systems safely and responsibly, with an emphasis on collaboration and transparency.
The declaration acknowledges the need to address the risks associated with cutting-edge AI models, particularly in sectors like cybersecurity and biotechnology, and advocates for enhanced international collaboration to ensure the safe, ethical and beneficial use of AI.
Michelle Donelan, the U.K. science and technology secretary, said the newly published guidelines would "put cybersecurity at the heart of AI development" from inception to deployment.
"Just weeks after we brought world-leaders together at Bletchley Park to reach the first international agreement on safe and responsible AI, we are once again uniting nations and companies in this truly global effort," Donelan said in the NCSC press release.
"In doing so, we are driving forward in our mission to harness this decade-defining technology and seize its potential to transform our NHS, revolutionize our public services and create the new, high-skilled, high-paid jobs of the future."
Reactions to these AI guidelines from the cybersecurity industry

The publication of the AI guidelines has been welcomed by cybersecurity experts and analysts.
Toby Lewis, global head of threat analysis at Darktrace, called the guidance "a welcome blueprint" for safe and trustworthy artificial intelligence systems.
Commenting via email, Lewis said: "I'm glad to see the guidelines emphasize the need for AI providers to secure their data and models from attackers, and for AI users to apply the right AI for the right task. Those building AI should go further and build trust by taking users on the journey of how their AI reaches its answers. With security and trust, we'll realize the benefits of AI faster and for more people."
Meanwhile, Georges Anidjar, Southern Europe vice president at Informatica, said the publication of the guidelines marked "a significant step towards addressing the cybersecurity challenges inherent in this rapidly evolving field."
Anidjar said in a statement received via email: "This international commitment acknowledges the critical intersection between AI and data security, reinforcing the need for a comprehensive and responsible approach to both technological innovation and safeguarding sensitive information. It is encouraging to see global recognition of the importance of instilling security measures at the core of AI development, fostering a safer digital landscape for businesses and individuals alike."
He added: "Building security into AI systems from their inception resonates deeply with the principles of secure data management. As organizations increasingly harness the power of AI, it is imperative the data underpinning these systems is handled with the utmost security and integrity."
Amazon's New AI Image Generator Creates An 'Invisible Watermark' On Every Deepfake
How hard is it to check whether an image is an AI deepfake or not? Well, Amazon claims it has some sort of answer with its own watermarking system for Titan, the new AI art generator it announced today. Of course, it's not that easy. Each business that makes use of the model will have to outline how users can check if that image they found online was actually just another AI deepfake.
Amazon did not offer many—or really any—details about Titan's capabilities, let alone its training data. Amazon Web Services VP for machine learning Swami Sivasubramanian told attendees at the company's re:Invent conference it can create a "realistic" image of an iguana, change the background of the image, and use a generative fill-type tool to expand the image's borders. It's nothing we haven't seen before, but the model is restricted to AWS customers through the Bedrock platform. It's a foundational model for businesses to integrate into their own platforms.
The cloud service-providing giant is like many other companies putting the onus on individual users to figure out whether an image was created by AI or not. The company said this new AI image generator will put an "invisible watermark" on every image created through the model. This is "designed to help reduce the spread of misinformation by providing a discreet mechanism to identify AI-generated images."
Just what kind of watermark that is, or how outside users can identify the image, isn't immediately clear. Other companies like Google DeepMind have claimed they have found ways to disturb the pixels of an AI-generated image in order to create an unalterable watermark. Gizmodo reached out to Amazon for clarification, but we did not immediately hear back.
AWS VP of generative AI Vasi Philomin told The Verge that the watermark doesn't impact the image quality, though it can't be "cropped or compressed out." It's also not a mere metadata tag, apparently. Adobe uses metadata tags to signify whether an image created with its Firefly model is AI or not. It requires that users go to a separate site in order to find if an image contains the metadata tag or not.
In order to figure out if an image is AI, users will need to connect to a separate API. It will be up to each individual company that makes use of the model to tell users how to access that AI scanning tech.
Simply put, it's going to be incredibly annoying to figure out which images are AI, especially considering every company is creating a separate watermarking system for their AI.
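Amazon has not disclosed how Titan's watermark actually works, so the following is only a toy sketch of the general idea behind "invisible" watermarking: hide a detectable signature in pixel values without visibly changing the image. The signature, pixel data, and function names here are all hypothetical; unlike Titan's claimed watermark, this naive least-significant-bit scheme would not survive cropping or compression.

```python
# Illustrative only: a toy "invisible watermark" using least-significant-bit
# (LSB) encoding on flat pixel data. This is NOT Amazon's method, just a
# sketch of the concept of hiding a detectable signature in an image.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit signature

def embed(pixels):
    """Overwrite the LSB of the first len(WATERMARK) pixel values."""
    out = list(pixels)
    for i, bit in enumerate(WATERMARK):
        out[i] = (out[i] & ~1) | bit  # the LSB shifts a value by at most 1
    return out

def detect(pixels):
    """Check whether the signature is present in the LSBs."""
    return [p & 1 for p in pixels[: len(WATERMARK)]] == WATERMARK

image = [200, 13, 77, 54, 255, 128, 9, 64, 33]
marked = embed(image)

# Each pixel value moved by at most 1, so the change is invisible to the eye.
assert all(abs(a - b) <= 1 for a, b in zip(image, marked))
print(detect(marked))  # True
print(detect(image))   # False for this unmarked image
```

A production watermark (like the one Google DeepMind has described) perturbs pixels in a way that is robust to edits, and detection typically requires a service-side API rather than a simple local check, which matches Amazon's plan to gate detection behind an API.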
Amazon's AI models may be the most closed-off of any major company's released so far. On Tuesday, the company revealed "Q," a customizable chatbot made for businesses, though we have no idea about its underlying models or training data. In a blog post, leading AI chip maker Nvidia said Amazon used its NeMo framework for training the new model. NeMo includes pre-trained models meant to speed up the development of new AI, but that doesn't offer many hints about just what kind of content went into the new AI art generator.
There's a reason the company wouldn't want to talk much about what went into this AI. A host of artists have criticized and even sued other companies, alleging that makers of AI art generators used their work without permission for AI training. Amazon has already promised to shield companies that use its AI coding assistant, CodeWhisperer, if anybody tries to sue. On Wednesday, Sivasubramanian told attendees that the indemnification policy would also apply to the Titan model.
AWS claims the Titan model has "built-in support for the responsible use of AI by detecting and removing harmful content from the data, rejecting inappropriate user inputs, and filtering model outputs." Just what that means is up in the air without getting the chance to test it. As with all the AI announced at its re:Invent conference, the model is locked up tight except for paying enterprise customers.
AI Is A Critical Building Block For Spatial Computing
We are at the cusp of a new computing paradigm: spatial computing. Spatial computing is when physical and virtual objects converge seamlessly, and it's made possible by artificial intelligence. This has profound implications for how we interact with technology and with each other. Spatial computing opens up endless possibilities for creativity, innovation, human connectivity, and new ways to work. It removes barriers, closes distances, and enables co-presence. Spatial computing will make the devices we use, and how we use them, blend into the natural daily flow and patterns of how we live our lives.
To appreciate the business value of spatial computing, we first have to create a working definition for the business world and explain the market opportunities it will enable. Once we unlock that, we can understand how business and computing will change in order to prepare for this transformation.
As I wrote in the Harvard Business Review, "Spatial computing is an evolving form of computing that blends our physical world and virtual experiences using a wide range of technologies, thus enabling humans to interact and communicate in new ways with each other and with machines, as well as giving machines the capabilities to navigate and understand our physical environment in new ways. From a business perspective… it will expand computing into everything you can see, touch, and know."
Spatial computing is an evolving 3D-centric form of computing that, at its core, uses AI, computer vision, and extended reality to blend virtual experiences into the physical world, breaking free from screens and turning every surface into a spatial interface. It allows humans, devices, computers, robots, and virtual beings to navigate and compute in 3D space. It ushers in a new paradigm for human-to-human interaction as well as human-computer interaction. These new interactions will enhance how we visualize, simulate, and interact with data in physical or virtual locations. It will expand computing beyond the confines of the screen into everything you can see, experience, and know.
Spatial computing allows us to navigate the world alongside robots, drones, cars, virtual assistants, and beyond. It's not limited to just one technology or device. It is a mix of software, hardware, and information that allows humans and technology to connect in new ways, ushering in a form of computing that could be even more impactful than personal and mobile computing have been to society.
Spatial computing brings digital information and experiences into a physical environment. It takes into account the position, orientation, and context of the wearer, as well as the objects and surfaces around them. It uses a new, advanced type of computing to understand the physical world in relation to virtual environments and the wearer, through emerging interfaces like wearable headsets with cameras, scanners, microphones, and other sensors built in. These new interfaces will span everything from shopping to work and play, and the world around us will talk to us in new ways via spatial computing. Spatial computing enables advanced gesture recognition (recognizing our hand motions and applying them as commands), and headsets will offer better-than-4K-resolution images for each eye.
Spatial computing uses information about the environment around it to act in a way that's most intuitive for the person using it. How businesses digitally transform using spatial computing will set them apart from the competition and set them up for success for generations who grow up in an increasingly blended virtual and physical world.
Spatial Computing Is Not Just About AR And VR

To many, spatial computing may not seem different from virtual reality or augmented reality. Augmented reality overlays digital content onto a physical space. Virtual reality is a completely immersive virtual environment. The extended reality (XR) spectrum is part of spatial computing, but it's not its only enabling technology. Artificial intelligence is on everyone's mind, along with XR, sensors, IoT, and new levels of connectivity. AI is one of the most important underlying technologies that will bring spatial computing to the masses.
Spatial computing is a mix of hardware and software that enables machines to understand our physical environment without us explicitly telling them about it. That enables us to create content, products, and services that have a purpose in both physical and virtual environments.
The future of spatial computing is poised for substantial growth, driven by key advancements. These include radical progress in optics, the miniaturization of sensors and chips, the ability to authentically portray 3D images, and the continuous evolution of spatial computing hardware and software. These innovations, supported by significant breakthroughs in AI, will make spatial computing increasingly compelling for businesses on a grand scale in the years to come.
AI's Critical Role In Spatial Computing

Artificial intelligence has a critical role in spatial computing, with applications personal to the wearer and to objects in the spatial environment. We won't type to AI; we'll talk to it. AI capabilities like large language models (LLMs) and deep neural networks (DNNs) let people interact with computers as they would with another human (instead of having to think like a database). Amazon's Alexa smart glasses say it all: let "voice control your world - hands-free." AI's critical role in spatial computing will force business professionals to start thinking about spatial computing in the context of today's AI revolution.
Here are some of the ways AI is a critical building block for spatial computing.
AI recognizes our hand gestures and body language. AI algorithms, like computer vision models, improve the accuracy and speed of object recognition and tracking. This is crucial for understanding the hand gestures we use to interact with the digital environment. AI can also identify and track objects and surfaces, which makes spatial computing experiences seamless and interactive. Spatial computing uses cameras, LiDAR, GPS, and sensors to capture the position, orientation, and movement of objects in an environment, allowing us to place and manipulate virtual objects in ways that correspond to the physical world.
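To make the gesture-to-command idea concrete, here is a minimal sketch, assuming a computer-vision model has already produced fingertip coordinates (faked here as plain tuples in normalized image space). The threshold, coordinates, and function names are hypothetical; real systems use trained models rather than a single geometric rule.

```python
# Illustrative only: mapping tracked hand landmarks to a command.
# A vision model would supply the fingertip positions; here they are
# hard-coded tuples of (x, y) in normalized [0, 1] image coordinates.
import math

def is_pinch(thumb_tip, index_tip, threshold=0.05):
    """Treat thumb and index fingertips closer than `threshold` as a pinch."""
    return math.dist(thumb_tip, index_tip) < threshold

def gesture_to_command(thumb_tip, index_tip):
    # A pinch acts as a "select" command, the spatial-computing analog of a click.
    return "select" if is_pinch(thumb_tip, index_tip) else "idle"

print(gesture_to_command((0.40, 0.52), (0.41, 0.53)))  # select (fingertips touching)
print(gesture_to_command((0.40, 0.52), (0.70, 0.20)))  # idle (hand open)
```

The design point is the separation of concerns: the AI model handles perception (where the fingertips are), while a thin rule layer maps that perception to interface commands.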
Object and scene generation. People will use spatial computing and AI together to generate content for digital spaces. Just like content creators today use their cell phones to create online content, so too will they use their wearable spatial devices to create 3D content. From games to spatial video and audio, creators will use AI to develop personalized scenes, objects, and entertainment for their fans.
Conversational navigation. Human beings are social animals; we thrive around other humans. While some argue that no one would wear a device on their face, many similarly doubted we would ever use the internet or mobile phones. Spatial computing will solve communication challenges as simple as finding a product at your favorite store or sharing your child's birthday party with relatives far away. There is a sense of volumetric presence: it's more than the sense of someone being there on a video call. It's as if they're sitting next to you, a 3D hologram you could reach out and touch.
Spatial computing will also serve as the medium to engage with AI.
AI takes data and generates new data. For DeepMind and Inflection AI founder Mustafa Suleyman, the next phase of artificial intelligence will be interactive AI. That raises the question of how and where we will engage with this interactive AI. That's where spatial computing comes in: it becomes the medium through which we engage with AI in a more human way. We will navigate the future of computing mostly through gestural recognition (which uses AI) or through conversational AI. While we might still type on virtual keyboards, that may not be convenient for simple day-to-day tasks.
Some foreshadowing of this change was seen during Apple's September event, where the new iPhone 15 and new Apple Watch were unveiled. Apple is conditioning the market to engage with technology through new interfaces like the Apple Watch's natural double-tap motion.
This was a big step for Amazon as well. The company recently announced its new Alexa LLM and Echo Frames, rooted in style and a more robust conversational AI interface. Amazon pushes the hands-free narrative: talk to Alexa to make calls, play an audiobook or start a playlist; talk to your Amazon glasses to check that the doors are locked, set the lights, and even adjust the thermostat with your voice.
Spatial navigation for autonomous vehicles, drones, and robots. Spatial computing isn't just for humans. Autonomous vehicles, drones, and robots need to know how to navigate the physical world. Spatial computing, along with AI, helps robots "see" and interpret the world around them. Whether it's knowing when to stop or go at a stoplight with pedestrians or orienting itself through a manufacturing facility, spatial computing with AI will guide robots through our world.
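A tiny sketch can show what machine navigation over a spatial model looks like in practice: a breadth-first search over an occupancy grid, the kind of map a robot might build from its sensors. The grid, coordinates, and function names are invented for illustration; real systems fuse LiDAR, cameras, and GPS into far richer maps and use more sophisticated planners.

```python
# Illustrative only: shortest path through an occupancy grid via
# breadth-first search. 0 = free cell, 1 = obstacle.
from collections import deque

def shortest_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path  # first arrival in BFS is a shortest path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable

# A hypothetical 3x3 warehouse floor with a shelf blocking the middle row.
warehouse = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
print(shortest_path(warehouse, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

The point is that once the physical world has been turned into data, routing a robot through it reduces to a well-understood search problem; the hard part, which AI handles, is building and maintaining that spatial model.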
In some ways, the spatial computer hardware becomes the interface through which humans are able to interact with AI in a more human-centered way, away from the screen and bringing it into our physical world. Through spatial computing, we will engage and see AI in the form of delivery robots, autonomous vehicles, drones, humanoid robots, or virtual beings and assistants navigating the world around us.
Spatial Computing Is A Key Step In The AI-Driven Business Revolution

Through AI, IoT, sensors, and more, spatial computing enables the creation of larger connected ecosystems that seamlessly integrate virtualization and data in many forms with our physical world and in front of our eyes, no matter where we are. In some ways, it helps elevate or augment the way we experience the physical world. This, in turn, will create new experiences, new ways to do business, and new utilities that can make spatial computing a valuable tool for innovation and business transformation.