In what could be a pivotal week for our relationship with artificial intelligence (AI), the UK will host a two-day AI Safety Summit at Bletchley Park on November 1-2, and, on October 30, President Biden issued a new Executive Order on “Safe, Secure, and Trustworthy Artificial Intelligence.”
Public debate and scrutiny of the new techno-social contract precipitated by powerful AI systems have, in general, focused on the risks posed by adaptation of what are now labelled “Frontier AI” technologies for malevolent purposes.
Such adaptations may be the result of human intent, model drift, or (eventually) self-organizing AI systems that establish their own objective functions in response to the evaluated context.
While the latter case remains closer to science fiction than science fact, the development of collaborative autonomous agents powered by new advances in machine intelligence may realize such capabilities in the near future.
In June 2022, one of the so-called “godfathers of AI”, Yann LeCun, VP & Chief AI Scientist at Meta, set out his vision of “a path towards autonomous machine intelligence” in a working paper of the same title.
Yann LeCun takes a praxeological perspective on the safety of AI systems, arguing that humans will ultimately moderate the effect of intelligent machines. This position is at odds with that of his fellow “godfathers” and 2018 ACM A.M. Turing Award Laureates, Yoshua Bengio and Geoffrey Hinton, who reiterate the risks posed by the evolutionary trend in AI development and advocate for the establishment of regulatory controls and enhanced governance within AI research and applications.
Recent clarion calls for a pause in AI development and for the definition of new governance frameworks, coupled with the perpetual framing of discussions about AI regulation around hypothesized “existential risk”, have brought the issue of AI adoption to the forefront of public consciousness.
This week’s trans-Atlantic government initiatives, along with the United Nations Secretary-General’s announcement on October 27 of a “High-Level Advisory Body on Artificial Intelligence” and continued progress towards ratification of the EU AI Act, represent only the first steps towards regulatory control of AI-related activity.
However, what I consider one of the most noteworthy aspects of the Executive Order on “Safe, Secure, and Trustworthy Artificial Intelligence” published by The White House is its explicit reference to the protection of privacy: breach of data privacy is a real risk posed by complex AI systems.
Responding to the Executive Order, NIST (the U.S. National Institute of Standards and Technology) identified its role in creating “guidelines for agencies to evaluate the efficacy of differential-privacy-guarantee protections, including for AI.” The scope of the Executive Order, however, appears more wide-reaching and consistent with the work that Fortanix has been doing with customers in the AI domain for several years.
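To make the NIST reference concrete: a differential-privacy guarantee bounds how much any single individual’s record can influence a published result, typically by adding calibrated noise. The sketch below is illustrative only (the patient records, predicate, and epsilon value are all invented for the example) and shows the classic Laplace mechanism applied to a counting query:

```python
import numpy as np

def laplace_count(records, predicate, epsilon):
    """Release a count with an (epsilon, 0)-differential-privacy guarantee.

    A counting query has sensitivity 1: adding or removing one person
    changes the true count by at most 1, so Laplace noise with scale
    1/epsilon is enough to mask any single individual's contribution.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical query: how many patients in a sensitive dataset are over 65?
patients = [{"age": 70}, {"age": 34}, {"age": 68}, {"age": 51}]
print(laplace_count(patients, lambda r: r["age"] > 65, epsilon=0.5))
```

Smaller epsilon values mean more noise and a stronger privacy guarantee; evaluating whether a system’s claimed epsilon actually holds in practice is exactly the kind of efficacy assessment the NIST guidelines target.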
President Biden has directed the prioritization of federal support for accelerated “development of privacy-preserving techniques – including ones that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data” and strengthening of “privacy-preserving research and technologies, such as cryptographic tools that preserve individuals’ privacy.”
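One widely used technique in this family is differentially private training, of which DP-SGD is the best-known example: each training example’s gradient is clipped to bound its influence, then noise scaled to that bound is added, so the finished model cannot memorize any single record. The following is a minimal sketch under invented assumptions (a linear model, a toy batch, arbitrary hyperparameters); production systems would use audited libraries such as Opacus or TensorFlow Privacy:

```python
import numpy as np

def dp_sgd_step(w, X_batch, y_batch, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    """One DP-SGD-style update for a linear model (illustrative sketch).

    Per-example gradients are clipped so no record contributes more than
    clip_norm, then Gaussian noise calibrated to that bound is added
    before averaging, limiting what the model can leak about any record.
    """
    clipped = []
    for x, y in zip(X_batch, y_batch):
        g = 2.0 * (x @ w - y) * x                        # squared-error gradient
        g = g / max(1.0, np.linalg.norm(g) / clip_norm)  # clip to clip_norm
        clipped.append(g)
    noise = np.random.normal(0.0, noise_mult * clip_norm, size=w.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / len(X_batch)
    return w - lr * noisy_mean

# Toy batch: 4 examples, 3 features (all values invented).
rng = np.random.default_rng(0)
w = dp_sgd_step(np.zeros(3), rng.normal(size=(4, 3)), rng.normal(size=4))
```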
These vital initiatives direct attention away from the hyperbole surrounding massive, foundational, frontier AI models towards qualification of the data used to train them and the provision of adequate security for that sensitive information.
By taking a data-first approach to security, Fortanix is at the forefront of data privacy and security provision for AI developers and AI adopters. Indeed, we launched our managed Confidential AI service for secure model training and inference nearly two years ago.
Controlling how data is used by scientists, software developers, and AI applications establishes improved transparency and auditability of AI models, in line with emerging regulatory objectives.
Crucial issues for the AI community include detection and mitigation of inherent model bias introduced by training data, management of data subjects’ consent and intellectual property rights within AI workflows, preservation of privacy and confidentiality within model deployment, and attribution of model function to discrete data.
All of these important areas of AI governance demand appropriate control of data privacy and security, leveraging the power of advanced cryptographic solutions and practical privacy-enhancing technologies (PETs) such as Confidential Computing, which Fortanix has pioneered.
Global negotiation of a new techno-social contract encompassing advanced AI capabilities is an undoubted challenge of our time. Yet many of the associated risks of AI adoption are well understood.
Data privacy, security, and confidentiality are integral to safe and responsible AI development. The Executive Order set out by President Biden represents an important and explicit recognition of some of the real and imminent risks posed by integration of our data within powerful AI technology.
Fortanix provides a range of proven capabilities and experience to support customers as they transition to the emerging AI regulatory environment, and we look forward to participating in new public initiatives that take a data-first approach to AI safety.