The most dangerous tendency of all:
These massive surveillance AI companies are moving to become defense contractors,
providing weapons and surveillance infrastructures to militaries and governments they choose to arm and cooperate with.
We are all familiar with being shown ads in our feeds for yoga pants (even though we don’t do yoga),
a scooter (even though we just bought one),
or whatever else.
We see these ads because the surveillance company running the ad market or social platform has determined that they’re things
“people like us” want or are drawn to,
based on a model of behavior built from surveillance data.
Since other people with data patterns that look like ours bought a scooter, the logic goes,
we will likely buy a scooter
(or at least click on an ad for one).
And so we’re shown the ad.
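To make the “people like us” logic concrete, here is a minimal, purely illustrative sketch of lookalike targeting: score every user by how closely their behavioral feature vector resembles those of known buyers, and show the ad to everyone above a cutoff. The function name, features, and numbers are all made up; real ad platforms use far more elaborate models, but the underlying pattern-matching is the same.

```python
import numpy as np

def lookalike_scores(user_vectors: np.ndarray, buyer_vectors: np.ndarray) -> np.ndarray:
    """Score each user by cosine similarity to the most similar known buyer."""
    # Normalize rows so dot products become cosine similarities.
    u = user_vectors / np.linalg.norm(user_vectors, axis=1, keepdims=True)
    b = buyer_vectors / np.linalg.norm(buyer_vectors, axis=1, keepdims=True)
    return (u @ b.T).max(axis=1)

rng = np.random.default_rng(0)
users = rng.normal(size=(1_000, 16))   # 1,000 surveilled users, 16 behavioral features
buyers = rng.normal(size=(50, 16))     # 50 known scooter buyers

scores = lookalike_scores(users, buyers)
threshold = 0.6                        # arbitrary cutoff; lowering it trades precision for reach
targeted = np.flatnonzero(scores > threshold)
print(f"{targeted.size} users flagged as 'likely scooter buyers'")
```

Note that nothing in this sketch checks whether anyone actually wants a scooter; the threshold simply decides how many strangers get swept in by resemblance alone.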
We know how inaccurate and whimsical such targeting is.
When it’s an ad, a mistargeted guess is no crisis.
But when the stakes are higher, it’s a different story.
We can trace this story to the post-9/11 US drone war,
with the concept of the #Signature #Strike.
A signature strike applies the logic of ad targeting to killing,
selecting people for death based not on knowledge of who they are
or certainty about their culpability,
but on data patterns and surveilled behavior that the US,
in this case, assumes to be associated with terrorist activity.
Signature strikes kill people based on their data profiles.
And AI, and the large-scale surveillance platforms that feed AI systems,
are supercharging this capability in incredibly perilous ways.
We know of one shocking example thanks to investigative work from the Israeli publication +972 Magazine,
which reported that the Israeli army, following the October 7th attacks,
has been using an AI system named #Lavender in Gaza,
alongside a number of others.
Lavender applies the logic of the pattern-recognition-driven signature strikes popularized by the United States,
combined with mass surveillance infrastructure and AI targeting techniques.
Instead of serving ads, Lavender automatically puts people on a kill list
based on the similarity of their surveillance data patterns to the data patterns of purported militants,
a process that we, as experts, know is hugely inaccurate.
Here we have the AI-driven logic of ad targeting,
but for killing.
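A back-of-the-envelope calculation shows why this is so dangerous at scale. The numbers below are assumptions for illustration (the list size loosely echoes the scale +972 reported, and the accuracy figure is hypothetical), but the arithmetic holds for any large list: even a classifier described as “90% accurate” wrongly flags thousands of people.

```python
# Illustrative arithmetic only; both numbers are assumptions, not verified figures.
flagged = 37_000        # people placed on the list (scale loosely per +972's reporting)
precision = 0.90        # fraction of flags assumed correct ("90% accurate")

false_positives = flagged * (1 - precision)
print(f"~{false_positives:,.0f} people wrongly flagged")  # ~3,700
```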
According to +972’s reporting, once a person is on the Lavender kill list,
it’s not just that person who is targeted:
the building where they live
(along with their family, neighbors, pets, and whoever else is there)
is subsequently marked for bombing,
generally at night, when they (and everyone who lives there)
are sure to be home.
This is something that should alarm us all.
(4/8)