Cybersecurity Implications Of The First-Ever U.S. National Security Memorandum On Artificial Intelligence
On Oct. 24, 2024, the White House issued the first-ever National Security Memorandum on Artificial Intelligence (AI), which outlines a comprehensive strategy for harnessing AI to fulfill U.S. national security needs while prioritizing its safety, security, and trustworthiness. This directive also aims to maintain U.S. leadership in advancing international consensus and governance around AI, building on progress made over the past year at the United Nations, as well as the AI Safety Summits in Bletchley and Seoul. Most notably, this memorandum directly fulfills the obligation to offer further direction for AI use in national security systems, as defined in subsection 4.8 of the Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI released last October.
The directive underscores the need to balance responsible AI use with flexibility, ensuring its potential is not unduly limited—particularly in high-stakes national security applications. While the memorandum holds broader implications for AI governance, the following cybersecurity-related measures are particularly noteworthy and essential to advancing AI resilience in national security applications:
1. Establish a Comprehensive Framework to Advance AI Governance and Risk Management in National Security
A central pillar of this memorandum is the introduction of the Framework to Advance AI Governance and Risk Management in National Security. This accompanying framework, paralleling the Office of Management and Budget's earlier memorandum on Advancing the Responsible Acquisition of AI in Government, provides a structured, comprehensive approach to managing the layered risks associated with AI use. For example, the framework mandates the continued testing, monitoring, and evaluation of AI systems, ensuring vulnerability assessments and security compliance throughout the AI lifecycle. The framework also requires robust data management standards, including the secure handling, documentation, and retention of AI models, alongside standardized practices for data quality assessment post-deployment.
Crucially, the framework offers targeted guidance for determining prohibited AI uses and managing "high-impact" AI systems. This approach ensures that agencies employ stringent and holistic risk management practices, especially when deploying AI applications that significantly impact U.S. national security.
2. Safeguard AI System Security and Integrity from Foreign Interference Risks and Cyber Threats
Recognizing that foreign adversaries are increasingly targeting AI innovations to advance their own national objectives, the memorandum tasks the National Security Council and the Office of the Director of National Intelligence (ODNI) with reviewing national intelligence priorities to improve the identification and assessment of foreign intelligence threats targeting the U.S. AI ecosystem (Section 3.2(b)(i)). Moreover, the ODNI, in coordination with the Department of Defense (DOD), Department of Justice, and other agencies, is responsible for identifying critical nodes in the AI supply chain that could be disrupted or compromised by foreign actors, ensuring that proactive and coordinated measures are in place to mitigate such risks (Section 3.2(b)(ii)). To mitigate the risk of gray-zone methods, the Committee on Foreign Investment in the United States is also directed to assess whether foreign access to U.S. AI proprietary information poses a security threat, providing a regulatory mechanism to block harmful transactions (Section 3.2(d)(i)).
Notably, the Artificial Intelligence Safety Institute (AISI) assumes expanded responsibilities to advance AI resilience. In particular, AISI is tasked with issuing specialized guidance to AI developers on managing safety, security, and trustworthiness risks in dual-use models; establishing benchmarks for AI capability evaluations; and serving as the primary conduit for communicating risk mitigation recommendations (Section 3.3(e)). Through these combined efforts to detect, assess, and block supply chain risks, the United States reinforces its commitment to protecting its technological edge and leadership.
3. Leverage AI's Potential in Offensive and Defensive U.S. Cyber Operations
To harness AI's potential to enhance both offensive and defensive U.S. cyber operations, the memorandum tasks the Department of Energy (DOE) with launching a pilot project to evaluate the performance and efficiency of federated AI and data sources, which are essential for frontier AI-scale training, fine-tuning, and inference (Section 3.1(e)(iii)). This project aims to refine AI capabilities that could improve cyber threat detection, response, and offensive operations against potential adversaries, aligning with the findings presented in the Senate's roadmap for AI policy.
Additionally, where appropriate, the Department of Homeland Security (DHS), Federal Bureau of Investigation, National Security Agency, and DOD are tasked with publishing unclassified guidance on known AI cybersecurity vulnerabilities, threats, and best practices for avoiding, detecting, and mitigating these risks during AI model training and deployment (Section 3.3(h)(ii)). This guidance is also expected to cover the integration of AI into other software systems, contributing to the secure deployment of AI in operational settings. Together, these actions have the potential to strengthen the United States' ability to leverage AI for cyber operations, helping to maintain a decisive technological advantage over adversaries who are actively seeking to use AI to undermine our security.
4. Secure AI in Critical Infrastructure
The memorandum also underscores the importance of securing AI within U.S. critical infrastructure, recognizing the risks AI can pose in sensitive sectors, including nuclear, biological, and chemical environments. In collaboration with the National Nuclear Security Administration and other agencies, the DOE is tasked with developing infrastructure capable of systematically testing AI models to assess their potential to generate or exacerbate nuclear and radiological risks (Section 3.3(f)(iii)). This initiative includes maintaining classified and unclassified testing capabilities, incorporating red-teaming exercises, and ensuring the secure transfer and evaluation of AI models.
Furthermore, the memorandum requires the DOE, along with the DHS and other agencies, to develop a roadmap for classified evaluations of AI's potential to create new or amplify existing chemical and biological threats, ensuring rigorous testing and proactively safeguarding sensitive information (Section 3.3(g)(i)). Through these efforts, the memorandum aims to protect the United States' critical infrastructure from emerging AI-related vulnerabilities, ensuring resilience against both unintentional risks and deliberate attacks.
5. Attract, Build, and Retain a Top-Tier AI Workforce
The memorandum underscores the critical importance of cultivating and retaining a robust AI talent pipeline to maintain expertise vital to national security - a long-standing struggle, especially in cybersecurity, where the administration has already launched targeted hiring initiatives to close talent gaps. For instance, Sections 3.1(c)(i) and 4.1(c) outline provisions to attract international AI experts, including expediting visa processes and addressing hiring hurdles. Specifically, the DOD, Department of State, and DHS are instructed to revise hiring policies to ensure they attract AI-related technical talent and align with national security missions. This includes offering expedited security clearances and scholarship programs aimed at building technical expertise within the government.
These workforce initiatives also align with findings from the Senate AI Insight Forums, which stressed the need to provide pathways for international students and entrepreneurs to remain in the United States post-education, leveraging tax incentives and strong protections for patents and intellectual property to foster innovation.
Looking Ahead
In light of the rapid pace at which foreign adversaries aim to deploy AI to usurp U.S. technological leadership, military advantage, and international influence, the release of this highly anticipated memorandum marks a significant and strategic milestone in AI governance and cybersecurity. By aligning ambitious AI innovation and integration goals with targeted cybersecurity and national security guidance, the memorandum aims for a balanced approach that seeks to avoid the dangers of self-imposed barriers, such as overregulation and bureaucratic delays, while preserving the nation's technological edge.
Highly responsive to insights from recent forums and working groups, the memorandum signals an ongoing commitment to refining AI governance through collaboration and cutting-edge research. However, as AI technology and global threats evolve, regular reassessment will be essential to preserve the memorandum's balance between fostering innovation, swift integration, and safeguarding national security interests. Sustaining this momentum will be imperative to fully realizing the objectives set forth in this memorandum.
The Flawed Reality Of Artificial Intelligence
Artificial intelligence (AI) is quite the rage in DC right now.
Part of it is genuine concern over a still mysterious technology that seems to be moving quickly into various aspects of our lives. No one wants a repeat of the uncontrolled Internet which went from "The Global Public Square" to The Global Public Battlefield.
Part of it, also, is the overhype of techno-bros who once again are religious in their belief that all their new tech is the next saving grace of humanity. Of course, the zeal is reinforced by the flipping of said new AI tech companies into a few billion bucks. AI clearly has not changed that most human emotion - greed remains profitable.
And the last part is the ignorance of the DC political class when it comes to tech issues - so much for a group of poli sci majors making sense of complex science. To say many are easily taken in by the hype would be kind.
Fortunately, a few on the Hill, at the Pentagon, and in the White House seem to recognize AI is important but flawed, supplying at least some reality check.
The Polluted Lake
The AI miracle in our ever-expanding information cyber universe is the ability to take large amounts of data and make "sense of it." And the AI software algorithms that allow a speedy racking and stacking of this huge amount of information from the cyberspace "data lake" are also supposed to learn from each dive into it.
In my opinion -- Nice ideas. Flawed concepts.
Continuing the lake analogy, much depends on where you are dipping into the data lake. This information body is polluted with all kinds of truths, half-truths, and plain lies.
And thus, learning also depends on that flawed water - easily polluted by a myriad of players, with the pollution becoming part of the algorithm's learning: a perpetuity of bad information.
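The feedback loop described above - polluted output flowing back into the training pool - can be sketched with a toy model. Everything here is my own illustration (the majority-vote "model," the counts, and the injection rate are invented for the sketch), not a depiction of any real AI system:

```python
# Toy sketch of the "polluted lake" feedback loop: a model trained on a
# partly polluted corpus repeats the majority claim, its output is scraped
# back into the lake, and a polluter keeps injecting falsehoods.
from collections import Counter

def train(corpus: list[str]) -> str:
    """A trivially simple 'model': answer with the majority claim."""
    return Counter(corpus).most_common(1)[0][0]

lake = ["true"] * 7 + ["false"] * 3   # the lake starts mostly clean

for generation in range(3):
    answer = train(lake)
    # The model's answer is published and re-enters the lake, while a
    # polluter injects four new falsehoods per cycle.
    lake += [answer] + ["false"] * 4

# After a few cycles the lie has become the majority view,
# and the model now repeats it.
print(train(lake))
```

The point of the sketch is only the dynamic: once polluted output is fed back as training data, the pollution compounds rather than washing out.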
The Plague Of The Spewing Polluter
Supreme among our opponents on the world stage, the Russians love polluting the water to their benefit. You don't run a dictatorship for 500 years and not learn to control information. And they ramped up the skill during the life-and-death struggle of the Cold War. Propaganda as power projection.
And in the 21st century, the old KGB propaganda fan Vladimir Putin and his 24/7 cyberspace chatbots provide a power in this new domain far beyond any capability Russia has on land, sea, air, or space. A cheap projection of power.
The Bias Of The Algorithms
Even with care about where you dip into the data lake, remember who is drawing up the algorithms guiding AI. The vast majority of them are middle- to upper-middle-class males between 25 and 35. And dare I add, overwhelmingly white.
I am not accusing them of deliberate prejudice. But I am acknowledging that they have a frame of reference for life and experience, as we all do. Theirs is one that does not necessarily reflect the true demographics and experience of the real world.
We've already seen this in AI-driven analysis of potential crime, with its exaggerated focus on minorities, never mind its bias regarding the role of women.
Back To Reality
So where does this leave us? It leaves us where such technology always does - with another interesting tool to use. But it is a tool with fundamental flaws, of which we must be careful.
In the final analysis, the best tool for the use and evaluation of AI sits between our ears. The human brain, too, has its biases and challenges. But it remains far more adaptable and willing to adapt than any AI so far - and despite the current AI hype, likely will remain so well into the future.
Ronald A. Marks is a former CIA and Capitol Hill staffer and IT executive. He is a member of the Council on Foreign Relations and a visiting professor at George Mason University's Schar School of Policy and Government, where he teaches about cyberspace and emerging technologies.
Better Artificial Intelligence Stock: CrowdStrike Holdings Vs. SentinelOne
Today's cyber threats are sophisticated and more challenging to detect than ever. Traditional antivirus software doesn't keep your computer as safe as it once did. This mismatch has paved the way for up-and-coming companies like CrowdStrike Holdings (NASDAQ: CRWD) and SentinelOne (NYSE: S) to thrive, as they bring new-age technology to the table.
These companies protect some of the world's largest corporations, are rapidly growing, and are taking market share from dated competitors.
The secret? Both leverage AI technology to respond to and evolve in the face of new threats in real time. Both companies have the makings of promising long-term tech stocks, but one has an edge as the better AI stock to buy today.
Here is what you need to know.
CrowdStrike and SentinelOne specialize in endpoint protection, which means they secure any virtual or physical device connected to a network, such as a laptop, tablet, mobile phone, etc. However, each company has steadily evolved beyond endpoint security to grow, pushing into new niches within cybersecurity like cloud and identity security.
Traditional antivirus programs log known threats and then monitor a device for them. Naturally, they are blind to threats they aren't familiar with. CrowdStrike and SentinelOne monitor everything happening within a device, using AI to analyze patterns and behaviors to find things that don't belong. This leads to faster threat responses and even proactive protection against threats they haven't seen before.
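The distinction above - matching known signatures versus flagging anomalous behavior - can be illustrated with the toy scanner below. The signature list, event names, and threshold are all invented for the sketch; neither vendor's actual detection logic looks like this:

```python
# Illustrative contrast between signature-based and behavior-based
# detection. All names and thresholds are hypothetical.
KNOWN_SIGNATURES = {"evil.exe", "trojan.dll"}  # invented threat database

def signature_scan(process_name: str) -> bool:
    """Flags only threats already present in the signature database,
    so it is blind to binaries it has never seen."""
    return process_name in KNOWN_SIGNATURES

def behavior_scan(events: list[str]) -> bool:
    """Flags a process whose observed behavior deviates from a normal
    baseline, even if the binary itself is brand new."""
    suspicious = {"encrypt_bulk_files", "disable_backups", "exfiltrate"}
    score = sum(1 for event in events if event in suspicious)
    return score >= 2  # threshold chosen arbitrarily for the sketch

# A never-before-seen ransomware binary slips past the signature scan...
print(signature_scan("totally_new.exe"))
# ...but its behavior pattern still trips the behavioral check.
print(behavior_scan(["encrypt_bulk_files", "disable_backups"]))
```

The sketch captures why behavior-based tools can respond to novel threats: they score what a process does, not what it is named or hashed as.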
Both companies score resoundingly well in third-party security evaluations. The research and advisory firm Gartner has rated CrowdStrike a leader in endpoint security for five consecutive years, and SentinelOne is on a four-year streak of its own. CrowdStrike and SentinelOne have garnered virtually identical ratings on Gartner's Peer Insights program, with respective average ratings of 4.8 and 4.7 out of 5, each based on over 1,500 reviews.
Is one better positioned than the other for the future? Maybe, though it would probably be splitting hairs to say so. Yet looking back, it would seem one has had a big advantage.
Despite their similarities, each company's stock has performed wildly differently:
(Chart: CRWD vs. S stock price performance)
SentinelOne has dramatically lagged CrowdStrike for a few reasons.
First, CrowdStrike is much larger. The company has generated $3.5 billion in revenue over the past four quarters to SentinelOne's $723 million. There is little doubt that CrowdStrike has far more market share today, which somewhat labels a smaller peer like SentinelOne as an underdog.
Second, CrowdStrike has been far more profitable than SentinelOne. CrowdStrike converts nearly 30% of its revenue to free cash flow and is GAAP profitable. SentinelOne only recently became free-cash-flow positive and is still losing money on a GAAP basis.
Lastly, SentinelOne went public in the summer of 2021, near the peak of the "Everything Bubble" caused by zero-percent interest rates. At its peak, SentinelOne traded at a blistering enterprise value-to-sales ratio of more than 120. This unsustainable valuation has steadily eroded over the past three years due to revenue growth and share price declines.
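The enterprise value-to-sales arithmetic behind that erosion is simple to sketch. The dollar figures below are round hypotheticals chosen only to mirror the dynamic described, not SentinelOne's actual financials:

```python
# Enterprise value-to-sales (EV/S): enterprise value is market cap plus
# debt minus cash, divided by trailing revenue. Figures are hypothetical.
def ev_to_sales(market_cap: float, debt: float, cash: float,
                revenue: float) -> float:
    enterprise_value = market_cap + debt - cash
    return enterprise_value / revenue

# At a bubble peak: a small revenue base under a huge market cap
# produces a triple-digit multiple.
peak = ev_to_sales(market_cap=15_000, debt=0, cash=1_000, revenue=115)

# Years later: revenue has grown several-fold while the share price
# fell, so the same formula yields a far smaller multiple.
later = ev_to_sales(market_cap=7_000, debt=0, cash=1_000, revenue=700)

print(round(peak, 1), round(later, 1))  # 121.7 8.6
```

The takeaway is that a multiple can compress dramatically through the denominator (revenue growth) as well as the numerator (price declines), which is exactly the combination the article describes.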
In summary, SentinelOne had inferior fundamentals and was way too expensive. The stock's poor performance makes total sense.
The stock that once traded at an enterprise value-to-sales ratio of more than 120 trades at a fraction of that today. SentinelOne's valuation is now half of CrowdStrike's:
(Chart: SentinelOne vs. CrowdStrike EV-to-revenue ratios)
That seems fair based on events to this point, but SentinelOne has enough going for it that it may start to close that gap.
The company is marching toward profitability as revenue grows. It is well funded, with $708 million in cash and zero debt, which it can use to invest in marketing, product development, and acquisitions. SentinelOne recently landed a blockbuster deal with Lenovo to provide security software on its PCs, which should help it maintain its robust growth.
I'll give CrowdStrike credit; the company grew revenue almost as fast as SentinelOne last quarter despite working from a much larger base. However, CrowdStrike caused a historic IT outage over the summer. It's fair to wonder whether SentinelOne and others can steal some business from CrowdStrike as customer contracts expire. Only time will tell. But between this, the Lenovo deal, and SentinelOne's smaller size, I like the underdog's odds of growing faster for longer.
As SentinelOne grows and its financials improve, look for the stock to earn a higher valuation and have a turn as the better-performing stock. SentinelOne is my AI stock pick moving forward.
Justin Pope has positions in SentinelOne. The Motley Fool has positions in and recommends CrowdStrike. The Motley Fool recommends Gartner. The Motley Fool has a disclosure policy.
Better Artificial Intelligence Stock: CrowdStrike Holdings vs. SentinelOne was originally published by The Motley Fool