U.S. Releases an AI Bill Of Rights That Though Encouraging Won’t Yet Move the Needle – JURIST

October 7th, 2022 1:45 am

Dr. Lance Eliot, an expert on AI & Law currently serving as a Stanford Fellow, argues that while the new US AI Bill of Rights is not a game changer, it is a step in the right direction...

What rights should we have in a society increasingly being scrutinized, monitored, and controlled via the use of Artificial Intelligence (AI)?

That's a good question.

To address this thorny and unresolved legal issue, the US White House released on October 4, 2022, a white paper informally referred to as an AI Bill of Rights, which more officially is entitled Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. The document is the work of the Office of Science and Technology Policy (OSTP), a federal entity that was established in the mid-1970s and serves to advise the American President and the US Executive Office on various technological, scientific, and engineering aspects of national importance.

Let's unpack the AI Bill of Rights and examine the pros and cons of this latest pronouncement pertaining to AI and the law.

Rightfully Thinking About AI And Human Rights

The naming of this as an AI Bill of Rights is a bit askew since it might inadvertently suggest that these are rights associated with AI systems that are considered to be reaching sentience or otherwise nearing a point of legal personhood. Not so. To clarify, this 73-page document is about human rights amid the ongoing onslaught of AI systems that are being deployed without sufficient attention to humankind's safety and well-being.

You might be aware that AI has been put into use by numerous private and public organizations and has ended up acting in a variety of discriminatory ways. Our civil rights and civil liberties are under attack by how AI is crafted and utilized. AI at times is ruinously undercutting data privacy. AI permeates all manner of social media and can wrongfully suppress the speech of those criticizing hate speech, ironically so. AI can be used to stalk someone across both electronic and physical worlds, endangering their personal safety.

On and on, the litany of AI endangerment goes.

A technical companion portion within the AI Bill of Rights describes dozens of real-world examples showcasing how AI is being improperly devised and fostering potential harm. The examples suffice to get the hair standing on the back of your neck. As an additional harbinger of concern, keep in mind that AI is expansively being rolled out and will ultimately be ubiquitous. You can anticipate a non-stop barrage of AI amidst nearly all of our daily apps on our smartphones and likewise AI-powered applications used by major companies and by governmental agencies.

If we are inexorably going to be immersed in an AI-permeated way of existence, the logical response is to stand up for the rights of humankind. Thus, the reasoned basis to forge an AI Bill of Rights that can valiantly protect people.

The US Constitution famously has a historic Bill of Rights that includes vital guarantees of personal freedoms and mindfully addresses the codification of legally stipulated rights. The first ten amendments of the Constitution are breathtaking in their scope and significance. This AI Bill of Rights attempts to leverage the revered nature of the Bill of Rights to draw public attention to what needs to be considered in an AI era (some might readily criticize trying to somewhat exploit the famed Bill of Rights in this naming manner, perhaps overstepping a proper sense of decorum, though it could be a small price to pay for engaging society in the upcoming AI legal morass).

The AI Bill of Rights posits five keystones (the principle names are quoted from the official white paper as cited earlier):

1. Safe and Effective Systems
2. Algorithmic Discrimination Protections
3. Data Privacy
4. Notice and Explanation
5. Human Alternatives, Consideration, and Fallback

AI that is programmed by humans can contain a plethora of hidden risks.

I am not alluding to existential risks such as AI that rises up and takes over humanity (we aren't yet in that ballpark). The kind of AI being confronted here consists of non-sentient algorithmic AI. Efforts to legislatively contend with algorithmic AI include the ongoing U.S. Congressional work toward crafting the Algorithmic Accountability Act, while in the European realm the EU Artificial Intelligence Act (AIA) is currently under review.

An Appetizer But Not A Meal

You would be hard-pressed to argue against the proposed precepts of the newly unveiled AI Bill of Rights. The five keystones are indubitably sensible. It is possible to quibble with some of the wording here or there, but overall, the indicated protections are what we need to be diligently considering.

That being said, the AI Bill of Rights has perhaps only whetted our appetite. Envision that this is the precursor or appetizer leading up to a fuller meal.

We have already seen this appetizer in other guises, such as the Ethical Principles of AI officially stated by the US Department of Defense (DoD) and even the somewhat comparable directives by the Vatican in its Rome Call For AI Ethics. A much more extensive elucidation of these types of AI-relevant humankind rights was well-documented in the Recommendation on the Ethics of Artificial Intelligence released last year by UNESCO (United Nations Educational, Scientific, and Cultural Organization), which was adopted by 193 member countries of the United Nations [8].

In that sense, the AI Bill of Rights has a lot to draw upon and yet also measure up to.

The AI Bill of Rights can be said to be insufficient in many ways, most notably that it is a non-binding blueprint: it carries no force of law, establishes no enforcement mechanism, and depends on voluntary adoption.

Despite those aforementioned insufficiencies, there is certainly something to be said for trying to put a stake in the ground and get the ball rolling on the regulatory governance of AI. Apparently, selected areas of the U.S. federal government will pilot the five keystones of the AI Bill of Rights (as suggested in the white paper as part of leading by example). The belief seems to be that this will illuminate the efficacy of the keystones and reveal ways to bolster and sharpen them.

Lawmakers are ultimately going to be in the driver's seat on all of this.

Those tasked with making our laws are going to be immensely challenged with the complicated chore of bringing together a veritable smorgasbord of recommended soft-law AI ethical practices and patchwork hard-law AI laws that are springing up throughout the states. Furthermore, our lawmakers should be wisely eyeing the globally emerging AI soft-laws and AI hard-laws that are available for the world to see and reuse.

Make no mistake, all of this is a burgeoning part of the law and growth is abundant.

Attorneys and law students will soon see that AI & Law is bubbling up to the surface. As more AI is devised and unleashed, companies and governments will need to seek out savvy AI-aware legal advisors. Meanwhile, the coming glut of new or imagined AI laws will require legal minds that can ensure that the laws as codified are sensible and practical. And the potential harms produced by AI will require lawyers who are willing to fight for humankind's rights against the blitz of dour AI systems.

Per the wisdom of Louis Brandeis, former Associate Justice of the U.S. Supreme Court: "If we desire respect for the law, we must first make the law respectable."

Let's all get into the action and make humankind's rights associated with the advent of AI a top priority. It assuredly seems like a respectable thing to do.

About The Author

Dr. Lance Eliot is a global expert on AI & Law and serves as a Stanford Fellow affiliated with the Stanford Law School (SLS) and the Stanford Computer Science Department via the Center for Legal Informatics. His popular books on AI & Law are highly rated and he has been an invited keynote speaker at major law industry conferences. His articles have appeared in numerous legal publications including MIT Computational Law Journal, Robotics Law Journal, The AI Journal, Computers & Law Journal, Oxford University Business Law (OBLB), New Law Journal, The Global Legal Post, Lawyer Monthly, Legal Business World, LexQuiry, The Legal Daily Journal, Swiss Chinese Law Review Journal, The Legal Technologist, Law360, Attorney At Law Magazine, Law Society Gazette, and others. Dr. Eliot serves on AI & Law committees for the World Economic Forum (WEF), United Nations ITU, IEEE, NIST, and other standards boards, and has testified for Congress on emerging AI high-tech aspects. He has been a professor at the University of Southern California (USC) and served as the Executive Director of a pioneering AI research lab at USC. He has been a top executive at a major Venture Capital (VC) firm, served as a corporate officer in several large firms, and been a highly successful entrepreneur.

Acknowledgment

This research is part of an ongoing initiative on AI & Law and thanks go to the Stanford University CodeX Center for Legal Informatics, a center jointly operated by the Stanford Law School (SLS) and the Stanford Computer Science Department. CodeX's emphasis is on the research and development of computational law, the branch of legal informatics concerned with the automation and mechanization of legal analysis.

Suggested citation: Lance Eliot, U.S. Releases An AI Bill Of Rights That Though Encouraging Won't Yet Move The Needle, JURIST Academic Commentary, October 5, 2022, https://www.jurist.org/commentary/2022/10/u-s-releases-an-ai-bill-of-rights-that-though-encouraging-wont-yet-move-the-needle/.

This article was prepared for publication by Ingrid Burke-Friedman, Features Editor. Please direct any questions or comments to her at commentary@jurist.org

Opinions expressed in JURIST Commentary are the sole responsibility of the author and do not necessarily reflect the views of JURIST's editors, staff, donors or the University of Pittsburgh.
