Guardrails for a Minefield

By Rafael Hoffman


“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The above one-sentence statement, released a few months ago, was signed by over 50 tech leaders and computer scientists who took the lead in developing advanced artificial intelligence (AI). That list included Bill Gates; Sam Altman, CEO of OpenAI; and Geoffrey Hinton, who resigned from Google with the express purpose of gaining independence to raise awareness of the dangers posed by the newfound prowess of this technology.

This and other similarly jarring statements by AI experts, coupled with the awareness of the technology's capabilities that many in the general public have gained through the availability of chatbots (computer programs that mimic human conversation), spurred policymakers in Washington to look seriously at regulatory steps to rein in this phenomenon.

Congress has held a series of hearings, and various attempts at legislation are in the works. The White House rolled out a five-point “AI Bill of Rights,” which it hopes will serve as a blueprint for industry standards and congressional action.

The U.S. is behind some other governments, namely the European Union, which has acted broadly to regulate AI in some areas.

Despite over a decade of social media companies using their power to evade meaningful regulation, some experts are optimistic, noting that, in the present instance, AI moguls themselves urged government action.

At a recent hearing, Illinois Democratic Senator Dick Durbin, who chairs the Judiciary Committee, noted that it was “historic” to have “people representing large corporations … come before us and plead with us to regulate them.”

Yet skepticism remains. Despite testifying in Congress about the need for guardrails, Sam Altman said the EU’s regulations had gone too far and pushed for changes that would favor the industry. Several senators noted that government foot-dragging and capitulation to Big Tech in the past does not set a promising stage for the future.

James Hendler

To gain a better understanding of the matter, Hamodia spoke with James Hendler, director of the Future of Computing Institute and the Tetherless World Professor of Computer, Web and Cognitive Sciences at Rensselaer Polytechnic Institute, in Troy, New York. He is also director of the RPI-IBM Artificial Intelligence Research Collaboration. In addition, Professor Hendler is the chairman of the Association for Computing Machinery’s (ACM) U.S. Technology Policy Committee, which has developed a set of advisory guidelines for AI regulation for the White House.

How would you address the level of challenge posed by AI and to what extent can government mitigate those challenges?

I don’t see AI as an existential threat, but it does present serious challenges, some of which could be alleviated with effective regulation.

If we look back around 20 years ago, America essentially decided that it was not going to regulate social media. Now, this has become a very political issue, but those discussions are about content. What definitely should have been done early on was to pass laws about privacy, libel, slander, and so on. Some very serious issues could have been mitigated had government acted when social media was in its early stages. Playing catch-up now is very difficult.

This is the time for the U.S. and the international community to seize the moment and prevent that from happening with AI.

What are the key concerns that you think government can effectively regulate?

The three general categories that should be addressed are accountability, transparency, and identification.

Accountability comes down to who is responsible for harm that AI causes. If I ask an AI to write an article and then publish the product containing non-factual information, who is responsible for that: me, the system’s developer, or someone else? That’s one question government needs to answer, and depending on how it’s dealt with, companies and users will have to take far more care in designing and using this technology.

Transparency is mostly about companies being open about how their algorithms work. Right now, AI issues are basically covered by laws related to intellectual property. For example, a program exists that advises judges on sentencing by estimating how likely the person is to commit another crime. The system claimed to be using AI. A case went all the way to the Supreme Court over whether judges could use this system. The ruling was that it could be used in conjunction with other considerations, but also that there must be transparency about how the system operates: What factors does it take into account, and how does it weigh them? In the end, although the system claimed to be using AI, it was not; it was just simple rules-based code. But the concept is applicable to many AI systems.

If you want an example especially pertinent to your readers, imagine an AI facial recognition system being used to determine something about members of the Orthodox community; it might confuse various people who look different but all have black hats, beards, and peyos. It’s not making an error within the rules it knows, but that betrays a lack of sophistication in the system. When the community’s standard of dress is similar, it might fail more often. Knowing how the system works would let you know about its shortcomings.

Very recently, a group of large AI-involved companies agreed to let their code be inspected. That’s a good first step, but government should not leave them to do it on their own terms.

Identification is the last major category. That means I should be able to tell whether something is AI-generated or not. A few months ago, someone circulated a video of the Pentagon being bombed, and it took close to half an hour for people to realize it was a fake. In the meantime, the stock market fell several hundred points. Now, no one knows whether that was done to try to manipulate the stock market, to cause a panic, or just for fun, but as things stand, no law was broken by playing this dangerous prank. If we had laws in place requiring AI-generated material to bear some kind of watermark or identification, and made the creators liable for heavy fines if they remove that mark, this could likely have been avoided. Even if not, at least the perpetrators would have been punished.

Another area where identification standards would be important is media and reporting. Suppose a journalist posts an interview with somebody, but it was faked using some form of AI. Doing these types of tricks has been possible for a long time, but AI makes them much cheaper and easier. Requiring identification, with punishments for breaking those standards, creates a far safer playing field.

Policy makers seem focused on privacy concerns. What is the nature of these issues and how can they be addressed?

Alphabet CEO Sundar Pichai (L) and OpenAI CEO Sam Altman arrive at the White House for a meeting with Vice President Kamala Harris on artificial intelligence, May 4. (AP Photo/Evan Vucci, File)

If a person goes to their doctor and describes their medical history, family history, and symptoms, that information is very strictly protected from being shared by HIPAA (the Health Insurance Portability and Accountability Act of 1996). Now we have a lot of people, wisely or not, asking medical questions of chatbots, but there are no laws specifically keeping the information they share private. Moreover, we don’t really know what the companies themselves are doing with this information. Is it being stored or sold?

Another related angle is an emerging type of intellectual property dispute created by the wide use of AI-driven language systems. You can ask one of them to write an article in the style of a given author on a given topic. What it comes up with will be partially original, but it will often have sentences or paragraphs lifted directly from that author’s work. Is that a violation of copyright, or is it fair use? We have clear rules about what is and is not plagiarism when a newspaper reporter does that, but when it comes to AI, the law is very vague and needs updating.

It’s even worse in the art world, where systems have been trained on existing pieces of art, and their new creations contain bits and pieces of work made by actual artists. These are also questions that government could clarify with regulatory guidance and some legislation.

What specific regulatory policies would you recommend to address these issues?

Samuel Altman, CEO of OpenAI, testifies during a Senate Judiciary Subcommittee on Privacy, Technology, and the Law oversight hearing to examine artificial intelligence, on Capitol Hill on May 16. (ANDREW CABALLERO-REYNOLDS/AFP via Getty Images)

The Association for Computing Machinery (ACM), which is the largest professional society for computer scientists, just released a report on regulatory suggestions relating to AI through its Global Technology Policy Council, of which I am the outgoing chairman. We’ve also testified to Congress about these matters.

Most of them suggest laws and oversight dealing with the topics we’ve discussed: transparency, accountability, identification, and privacy. We also have points about holding companies responsible for ensuring that their products run in responsible ways, keep humans in control of capabilities, and avoid creating systems that could lead to some of the catastrophic outcomes we’ve heard about. The White House’s recent Blueprint for an AI Bill of Rights is based on five points that largely cover the concepts and some of the methods we recommended.

The ideas of what they want to address are good, but they’ve left out some points on how to regulate effectively, which we think would be important.

Do you favor creating a new federal agency to deal with AI or instructing each existing agency to develop relevant guidance?

Rep. Mike Gallagher (R-Wis.), chairman of the House Armed Services’ Cyber, Information Technologies, and Innovation Subcommittee, leads a hearing on adopting and deploying artificial intelligence effectively in the modern battlefield, at the Capitol, July 18. (AP Photo/J. Scott Applewhite)

There have been calls for a government agency specifically responsible for AI regulation, just like the FDA is responsible for drug safety and the FCC governs communication. That idea has merit, and if I could have my druthers, that’s the option I would pick because, within limits, it could be an effective arm in coordinating policy and enforcement. But I think there’s pretty strong agreement that won’t happen in the U.S. That being the case, the more likely model would be for each agency to develop its own policies and find its role in AI regulation.

The result of that will likely be what we’ve seen in other sectors: competing rules end up being litigated and ironed out through the courts. Most existing agencies have authority to control some of the worst abuses of AI, but that model might be a little more disjointed and take longer to have a real effect.

To what extent can existing laws be applied, and to what extent are new ones needed?

I think it’s a combination. The vast majority of the things people are worried about could be taken care of through existing laws if government works out how to apply them to AI. But if you look at something like watermarking, the FCC should have authority to enforce it, yet it would take an act of Congress to create that requirement.

Technology advances quickly and government tends to move very slowly. Government still has done very little on social media, which it acknowledges is a problem. Is government capable of tackling the risks associated with AI?

There seems to be bipartisan agreement that AI regulation is needed. The argument is more about how strict or lenient those regulations should be. Social media is a 20-year-old problem that now exists as a very powerful market force, which is very difficult to reckon with.

The speed at which AI took some steps in the past few months surprised even many experts, and certainly the general public, but it’s still in its early stages. That’s an advantage, because there are still a lot of things that would be impactful even if done at the speed of law.

Now, a lot of the huge companies that run social media are the same ones developing AI. But I still think there’s room to act early on this. We can still close the barn doors rather than working to get the horses back in. That’s especially true since things like watermarking and demanding transparency are relatively easy to do.

There is a bipartisan bill presently being crafted in the Senate; we know Senator Schumer’s team has interviewed over 150 organizations, including mine. They are trying to come up with something that makes sense but, at the same time, will not stifle industry. AI is already an important economic tool and has a role in national security. It’s important that our regulation not hold back the U.S. and allow foreign countries to take the lead. We know that China has already invested heavily in AI development.

Do you think AI moguls are sincere in their requests for regulation or are they trying to keep regulation on their terms?

Not long after AI became a national subject, there was a proposal from a lot of big tech leaders calling for a six-month moratorium on development. That looked very altruistic, but if you think about it, who would that help? The large companies that signed the letter would be able to keep marketing the technologies they had developed, while competition from start-ups would be stifled.

You do have to keep an eye on Big Tech motivations here. That said, there are some beneficial regulations that are in the interests of large companies as well. Watermarking would help them. All you need is one start-up that’s good at making fake AI-generated videos that cause a lot of chaos and the public could blame AI without differentiating between this little company and the big ones. In that sense, making a more controlled environment is beneficial to all legitimate actors.

What is your evaluation of the EU’s actions and what could be applied in the U.S.?

The EU has done a significant amount of work on privacy, but some other areas of their AI regulation are still being clarified. I think that their privacy rules will have an influence on how America deals with these issues, and that is important because it’s the type of thing that would be less effective if there are vastly different rules between countries. In general, though, the EU is more willing to restrict the corporate world than America, so there will be limits to what the U.S. borrows from them.

How optimistic are you that AI will be effectively regulated?

I’ve worked on AI for more than 40 years, and I, along with a lot of others in the field, was surprised by the rapidity with which some of this happened. A lot of that had to do with economics. We’ve said for a long time that if someone was willing to put $10 or $15 billion into this, a lot more could happen. Then OpenAI came along and pushed the big tech companies to do just that.

Even so, people should realize how far from perfect any of these systems are. When I give a talk somewhere, beforehand I usually ask one of the AI systems to give me five famous alumni from that college. So far it has never gotten all of them right.

There are things here that have real dangers, but most of the arguments about real catastrophic results are based on humans abusing AI as a tool.

The question becomes, how powerful a tool is this? If AI is a hammer, then it could mostly be used to knock in nails, and we don’t need laws to prevent people from hitting someone over the head with it. If it’s a nuclear bomb, that’s a different story.

If we get some of these regulations in place, we can keep it closer to being a chainsaw, something that could do a lot of damage, but that has good uses and is easy enough to control.
