Gautam Mukunda, Columnist

How Should We Regulate AI? The Same Way We Do Airlines

A federal agency tasked with both controlling and promoting the technology — paired with help from the states — could keep things safe without stifling innovation. 

Planes were also once a fledgling technology.

Photographer: Kevin Carter/Getty Images North America

Although there’s no shortage of AI hype in Silicon Valley, it’s impossible to miss how many of the technology’s biggest proponents are also concerned about its possible dangers. The idea that artificial intelligence could become an existential threat is on the table: OpenAI CEO Sam Altman has speculated about AI’s potential for hacking into essential computer systems or designing a biological weapon, and Tesla Inc. CEO Elon Musk has described it as “potentially more dangerous than nukes.” Even if it never comes to that, improperly designed AI systems could still be very dangerous in critical applications, ranging from self-driving cars to health care.

This makes regulating AI seem like a no-brainer. Yet there is no federal law or agency that broadly does so, and, as evidenced by the failed attempt to include a ban on state regulation in the Trump administration’s spending bill, there are some in government and industry who would like to roll back the state-level efforts that do exist.