Many of you will have seen the news that Governor Gavin Newsom has vetoed SB 1047, the groundbreaking AI safety bill that overwhelmingly passed the California legislature. Newsom gave a disingenuous explanation (which nobody on either side of the debate took seriously), namely that he vetoed the bill only because it didn’t go far enough (!!) in regulating the misuses of small models. While sad, this doesn’t come as a huge surprise, as Newsom had given clear prior indications that he was likely to veto the bill, and many observers had warned to expect him to do whatever he thought would most further his political ambitions and/or satisfy his most powerful lobbyists. In any case, I’m reluctantly forced to the conclusion that either Governor Newsom doesn’t read Shtetl-Optimized, or else he somehow wasn’t persuaded by my post last month in support of SB 1047.
Many of you will also have seen the news that OpenAI will change its structure to become a fully for-profit company, abandoning any pretense of being controlled by a nonprofit, and that (possibly relatedly) almost no one now remains from OpenAI’s founding team besides Sam Altman himself. It now looks to many people as though the previous board has been 100% vindicated in its fear that Sam did, indeed, plan to move OpenAI far away from the nonprofit mission on which it was founded. It’s a shame the board didn’t manage to explain its concerns clearly at the time, whether to OpenAI’s employees or to the wider world. Of course, whether you see the new developments as good or bad is up to you. Me, I rather liked the previous mission, as well as the expressed beliefs of the previous Sam Altman!
Anyway, you’ll certainly have known all this already if you read Zvi Mowshowitz. Broadly speaking, there’s nothing I could possibly say about AI safety policy that Zvi hasn’t already said in 100x more detail, anticipating and responding to every conceivable counterargument. I have no clue how he does it, but if you have any interest in these matters and you aren’t already reading Zvi, start.
Whatever the setbacks, the work of AI safety continues. I’m not and have never been a Yudkowskyan … but nevertheless, given the empirical surprises of the past four years, I’m now firmly, 100% in the camp that we need to approach AI with humility about the magnitude of the civilizational transition that’s about to occur, and about our enormous error bars on what exactly that transition will entail. We can’t just “leave it to the free market” any more than we could have left the development of thermonuclear weapons to the free market.
And yes, whether in academia or working with AI companies, I’ll continue to think about what theoretical computer science can do for technical AI safety. Speaking of which, I’d love to hire a postdoc to work on AI alignment and safety, and I already have candidates. Would any person of means who reads this blog like to fund such a postdoc for me? If so, shoot me an email!