For regulators trying to put guardrails on AI, it’s mostly about the arithmetic. Specifically, an AI model trained using more than 10 to the 26th floating-point operations must now be reported to the U.S. government and could soon trigger even stricter requirements in California.
Say what? Well, if you’re counting the zeroes, that’s 100,000,000,000,000,000,000,000,000, or 100 septillion, calculations over the course of training, using a measure known as flops (floating-point operations). Note that the threshold counts total operations, not operations per second.
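To get a feel for the scale, here is a rough back-of-envelope sketch in Python. The hardware figures are assumptions for illustration, not anything in the regulations: a single modern datacenter accelerator sustaining on the order of 10^15 operations per second, and a hypothetical cluster of 10,000 of them.

```python
# Back-of-envelope: how long would 1e26 floating-point operations take?
# Assumed figures (illustrative only, not from the article or the rules):
#   - one GPU sustaining ~1e15 operations per second at full utilization
#   - a cluster of 10,000 such GPUs

THRESHOLD_FLOPS = 10**26   # regulatory threshold: TOTAL training operations
GPU_RATE = 1e15            # assumed operations per second for one GPU
NUM_GPUS = 10_000          # assumed cluster size

seconds = THRESHOLD_FLOPS / (GPU_RATE * NUM_GPUS)
days = seconds / 86_400

print(f"{THRESHOLD_FLOPS:.0e} ops on {NUM_GPUS:,} GPUs ≈ {days:,.0f} days")
# -> roughly 116 days of nonstop computing
```

Under those assumptions, crossing the threshold means months of continuous work by an enormous cluster, which is why the number is used as a proxy for only the very largest training runs.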
What it signals to some lawmakers and AI safety advocates is a level of computing power that might enable rapidly advancing AI technology to create or proliferate weapons of mass destruction, or conduct catastrophic cyberattacks.
It’s already dangerous. Lavender AI has been used by Israel to “identify Hamas agents”. Anduril (a name they do not deserve) is working on drones that will autonomously identify and attack targets. It’s already enabling killing to be done without remorse.