Walker Opening Statement at AI Counterterrorism Hearing
WASHINGTON – Subcommittee on Intelligence and Counterterrorism Ranking Member Mark Walker (R-N.C.) today delivered the following opening statement at a subcommittee hearing entitled “Artificial Intelligence and Counterterrorism: Possibilities and Limitations.”
I want to thank Chairman Rose for holding this hearing today. I look forward to hearing from our distinguished panel on the capabilities and limitations of using artificial intelligence, or AI, to monitor online extremist content.
The ingenuity and superiority of the U.S. private sector continue to drive the development of new technologies, products and services that have revolutionized the world. The development of AI is another example, and we have only seen the beginning of what this technology can do.
AI has the potential to address a variety of major global issues, and research and development is happening across all sectors. U.S. educational institutions, including those in my home state of North Carolina, are leading cutting-edge research in healthcare, pharmaceuticals, transportation, data science, and many more fields.
Today, we are primarily reviewing how the technology is used by social media companies to operate their platforms and identify content that may need to be removed.
It is clear that technology is not a silver bullet for identifying and removing extremist content, given the volume of content uploaded every second on social media platforms.
AI technology, in its current form, is limited and cannot evaluate context when reviewing content. There have been a number of notable examples in the past few years where AI has flagged portions of the Declaration of Independence and removed historical images from media reports. We must also be mindful that algorithms and content moderation policies are ultimately subjective, as they are developed and operated by humans who possess their own biases.
As legislators, we must proceed with caution in considering the appropriate role for Congress here, understanding the potential to stymie free speech.
We also must recognize that the social media companies themselves have a First Amendment right to host content and to develop and modify their terms of service and content moderation policies to foster an open and free space for expression.
We have come to a crossroads in the debate over what content should be prohibited from social media platforms and the appropriate mechanisms to identify and remove such content. Today’s hearing will help us further our understanding of the current capabilities of AI technology and gather recommendations on what more the social media companies could be doing to apply AI to content moderation.
At a minimum, we need to discuss the continually changing terms of service implemented by many of these companies and the need for greater transparency in how they make content removal decisions, not only to individual users but also to the community as a whole.
I look forward to the testimony and I want to thank the witnesses for appearing here today. I yield back the balance of my time.