Recently I had the opportunity to speak at the California State Assembly opposing #SB1047, the #AI bill trying to regulate frontier models. I felt fortunate to have my voice heard, and I believe it was an important step toward drafting legislation that addresses the concerns of #small-businesses.
Some of you know my concerns about the 100M USD #training cost cap, the bill's #definition of AI, and other gray areas, such as: what counts as training costs for the purposes of the bill - data gathering, data cleaning, labor spent writing the code, electricity, hardware purchase and depreciation? Another important point is that a 100M USD investment in training does not guarantee that a given AI model will be a "#frontiermodel," since training costs vary widely with the cost of #labor and #electricity - and both are cheaper outside of #California.
On the other hand, representing Benchmark Labs Inc., an AI company at the forefront of AI-based #weather forecasting, I find the provision making AI companies liable for the misuse of their models' output deeply troubling, as neither NOAA (the National Oceanic & Atmospheric Administration) nor other weather information providers are liable for the potential misuse of their information.
Consider this very real use case: #farmers, #land managers, and #energy companies can make better asset management decisions based on #future weather conditions; however, a bad actor with the same information might decide when it is a good time to start a #wildfire or spread airborne contaminants. (History fact: weather forecasts aided the Allies on D-Day.) Why would a weather forecasting company be responsible for a bad actor's actions just because it uses AI, while non-AI companies and institutions are not?
We already have #anti-terrorism legislation. Why would we, as a society, go after #core-technologies rooted in #math and #science instead of targeting the crime itself?