First, ML models are a giant pile of linear algebra.
In short, the ML industry is creating the conditions under which anyone with sufficient funds can train an unaligned model.
One thing you can do with a Large Language Model is point it at an existing software system and say "find a security vulnerability".
ML systems sometimes tell people to kill themselves or each other, but they can also be used to kill more directly. They are going to be used to kill people, both strategically and in guiding explosives to specific human bodies.
49 minutes ago @ aphyr.com
