Aspen Institute’s Global Cybersecurity Group Offers Government Guidance on Generative AI and Cybersecurity

January 16, 2024

New paper provides insight on regulation and oversight, addressing impact and risks of these emerging tech tools in cybersecurity contexts

 

CONTACT: Victoria Comella
High10 Media
victoria@high10media.com

Washington, D.C. – January 16, 2024 – The Aspen Institute’s Global Cybersecurity Group has released Generative AI Regulation and Cybersecurity: A Global View of Policymaking. The new paper delves into the multifaceted ways generative artificial intelligence (AI) is transforming cybersecurity as well as the opportunities and challenges that these rapidly developing technologies present for regulatory endeavors. This cybersecurity effort is led by the Aspen Digital program, which works at the intersection of technology, information, security, and the public good.

Cyberattacks are a daily occurrence that threatens financial stability, national security, and public safety. The actions governments and organizations take today will lay the foundation that determines who benefits more from the emerging capabilities of generative AI – defenders or attackers. As governments and civil society groups navigate this uncharted territory, it is crucial that they consider the potential of these tech tools to bolster security while remaining vigilant to the ethical and practical considerations that accompany their deployment.

“Effective governance and regulation will require finding a balance between hope and fear, and it’s okay to take some time to get that right,” said Jeff Greene, Senior Director of Cybersecurity Programs at Aspen Digital. “These technologies will change lives, both for better and for worse.”

As this technology continues to advance, the Global Cybersecurity Group offers an analysis of the current regulatory efforts and limitations in the domain of cybersecurity. 

Recommendations for fostering effective government action include:

  • Start with the end user in mind. Before governments act, they need to have a clear objective, beyond mitigating risks or minimizing harms. If they do not know what conduct, outcomes, or values they are advancing for their citizens, their efforts are unlikely to be successful.
  • Assess criminal and civil liability. Current laws were written without consideration of generative AI and, in many cases, before it was even imagined. At minimum, governments should review current statutes to see if they need revision to account for these developments and the legal disputes that could come with them.
  • Consider technology safeguards and feasibility. The full uses and applications of these emerging technologies will never be easily defined, since the possibilities for utilization increase as they develop. Any regulatory and legal safeguards proposed must be flexible to keep up with advances.
  • Establish standards. Standards serve as the operational bedrock for AI, tying systems, processes, and tools into a cohesive whole. AI standards will shape the future landscape of innovation.

Cautions and caveats for government action include:

  • Creating consent fatigue – A unified labeling scheme indicating the presence of AI-generated content could make misrepresentation easier to police, but ubiquitous labels risk numbing users into ignoring them.
  • Mistaking actions for results – Governments should leverage existing tools to help address the most pressing security concerns and give themselves the appropriate time to think through risks as they materialize.
  • Ignoring the openness of generative AI tool access – Governments need to carefully consider which parts of these technologies can or should be open to the general public.

Governments and regulatory bodies will continue to account for the implications of these systems throughout 2024 and grapple with dataset openness, human oversight, and transparency of commercial generative models, all of which could reduce risks. Industry self-regulation, governance, and codes of ethics are also constructive steps.

Generative AI Regulation and Cybersecurity is available to download on aspendigital.org.

Members of the press are invited to attend Beyond the Clickbait: the Impact of AI on Cybersecurity, on Tuesday, January 16, from 10:00 to 11:30am ET. The event will be streamed online for free. Attendees will hear from Former NSA Deputy Director George Barnes, White House Special Advisor on Artificial Intelligence Ben Buchanan, Gibson Dunn Partner Jane Horvath, Omidyar Network Responsible Technology Director Govind Shivkumar, Dell Technologies VP & Business Unit Security Officer Bobbie Stempfley, Google Senior Engineering Director Amanda Walker, and RAND Corporation Senior Researcher Jonathan W. Welburn. Reporters may learn more and register online. 

###

ABOUT ASPEN DIGITAL

Aspen Digital is a nonpartisan technology and information-focused organization that brings together thinkers and doers to uncover new ideas and spark policies, processes, and procedures that empower communities and strengthen democracy. This future-focused Aspen Institute program inspires collaboration among diverse voices from industry, government, and civil society to ensure our interconnected world is accessible, safe, and inclusive – both online and off. Across its initiatives, Aspen Digital develops methods for elevating promising solutions and turning thought into networked impact. To learn more, visit aspendigital.org or email aspendigital@aspeninstitute.org.

ABOUT THE ASPEN INSTITUTE

The Aspen Institute is a global nonprofit organization whose purpose is to ignite human potential to build understanding and create new possibilities for a better world. Founded in 1949, the Institute drives change through dialogue, leadership, and action to help solve society’s greatest challenges. It is headquartered in Washington, DC and has a campus in Aspen, Colorado, as well as an international network of partners. For more information, visit www.aspeninstitute.org.
