In the card game Uno there is a card that doesn't directly assist the player but changes the nature of the game. If you play the Reverse card, the order of play swings back in the other direction, from clockwise to counterclockwise or vice versa.
Often, when I think about a new technology or social change, I see the ways it can be abused. For example, police install microphones to pinpoint gunshots for rapid response. Cities are full of angles and sharp noises - how do the microphones distinguish a gunshot and determine its location? How many false alarms? More importantly, consider how an officer, determined to find a particular person responsible, could find an echo of a sharp noise in that person's vicinity and implicate them in firing a gun.
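For context on the location question: these systems are generally framed as time-difference-of-arrival problems - several microphones hear the same impulse at slightly different moments, and those offsets constrain where the sound came from. Here is a minimal sketch of that idea, with made-up sensor positions, timings, and a generic least-squares solver; it is not any vendor's actual system.

```python
# Minimal time-difference-of-arrival (TDOA) sketch. The mic layout, timing
# noise, and solver are illustrative assumptions, not a real deployment.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s, roughly, at street level

mics = np.array([[0.0, 0.0], [400.0, 0.0], [0.0, 400.0], [400.0, 400.0]])  # meters
true_source = np.array([120.0, 250.0])

# Arrival times at each mic, plus a little jitter standing in for echoes/noise.
rng = np.random.default_rng(0)
arrivals = np.linalg.norm(mics - true_source, axis=1) / SPEED_OF_SOUND
arrivals += rng.normal(scale=0.002, size=len(mics))  # ~2 ms timing error

def residuals(guess):
    # Compare predicted arrival-time differences (relative to mic 0)
    # against the measured ones.
    pred = np.linalg.norm(mics - guess, axis=1) / SPEED_OF_SOUND
    return (pred - pred[0]) - (arrivals - arrivals[0])

fit = least_squares(residuals, x0=np.array([200.0, 200.0]))
print("estimated source:", fit.x)  # lands near (120, 250)
```

The sketch also shows where the doubt creeps in: a few milliseconds of timing error, an echo off a wall, or a sharp noise that isn't a gunshot all feed the same solver and still produce a confident-looking point on a map.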
Another example, this time from China. The PRC has a travel "passport" of sorts to deal with Covid under its zero-tolerance policy. If you are vaccinated and recently tested negative, then you have a "green" code and are free to move about the country and your life. A "yellow" means possible exposure and you need to be tested again; a "red" places you in quarantine. In Henan, there is currently a banking crisis: several rural banks promised huge returns on savings and are now denying withdrawals. Most assume that the money is gone in some sort of fraud scheme. Some savers traveled to Henan with their "green" status to protest at the doors of the banks and, when they went to leave the transit station, were flagged as "red" and quarantined. This is an abuse of the Covid warning system.
I will not debate the politics of either example - they are both swings towards fascism and overt control of the populace - but let us examine the opposite. In the film and short story Minority Report, the technology exists to warn the police of crimes before they occur. Similar systems have been trialed in the United States, with abysmal and racist results. A new paper tested the opposite, attempting to identify potential victims and connect them with social services: "Machine Learning Can Predict Shooting Victimization Well Enough to Help Prevent It," by Heller, Jakubowski, Jelveh, and Kapustin.
Out-of-sample accuracy is strikingly high: of the 500 people with the highest predicted risk, 13 percent are shot within 18 months, a rate 130 times higher than the average Chicagoan.
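To make the quoted figures concrete (this is just restating them as arithmetic, nothing beyond what the sentence says):

```python
# Back-of-the-envelope from the quoted figures only.
top_k = 500
hit_rate = 0.13          # 13% of the 500 highest-risk people are shot within 18 months
relative_risk = 130      # "130 times higher than the average Chicagoan"

shot_in_top_k = top_k * hit_rate          # 65 people
baseline_rate = hit_rate / relative_risk  # 0.001, i.e. ~0.1% over 18 months
print(shot_in_top_k, baseline_rate)
```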
In this scenario, the antagonist does not matter. Suffering is relieved, and because a shooting is not a strictly determined event, it does not (necessarily) get displaced onto another person. This seems like a win: reduced police involvement and a reduction in shootings. There is still a high financial cost, but that should be compared to the cost of an individual being shot: either death and the cessation of economic activity, or disability and the reduction of it.1
I wonder how many other technologies or structures could be "reversed" like this - an artillery piece becoming a cloud seeder, or spraying a cluster of sunflower seeds rather than bombs.
[UPDATE]
Using the recorded sounds from the barns as well as sounds made in real time in a live demonstration, the algorithm rapidly and successfully identified 97% of distress calls as the chickens were making them, distinguishing these from other chicken sounds and from general barn noise, the team reports today in the Journal of the Royal Society Interface.
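The quote does not describe the team's actual pipeline, but detectors like this are commonly built by slicing audio into short windows, summarizing each window with spectral features, and training a classifier on labelled distress and non-distress clips. A rough sketch of that pattern - the synthetic audio, feature choice, and model below are my own assumptions, not the paper's method:

```python
# Sketch of a call detector: featurize one-second clips, train a classifier.
# The "distress" and "other" audio here is synthetic stand-in data.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

SR = 22050  # sample rate, Hz

def mfcc_summary(clip):
    """Mean and std of MFCCs over a one-second clip -> one feature vector."""
    mfcc = librosa.feature.mfcc(y=clip, sr=SR, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

rng = np.random.default_rng(0)
t = np.linspace(0, 1.0, SR, endpoint=False)

# Stand-ins for labelled barn recordings: a shrill chirp for "distress",
# broadband noise for "everything else".
distress = [np.sin(2 * np.pi * rng.uniform(3000, 4000) * t) + 0.3 * rng.normal(size=SR)
            for _ in range(40)]
other = [rng.normal(size=SR) for _ in range(40)]

X = np.array([mfcc_summary(c.astype(np.float32)) for c in distress + other])
y = np.array([1] * len(distress) + [0] * len(other))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

The important part, for the questions below, is the training data: the classifier only knows the distribution of sounds it was shown, which is exactly why the barn-versus-free-range question matters.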
My prediction is that AI and ML will not be, in our lifetimes (say the next 100 years), the equivalent of a human mind or of a particularly keen animal. This report in Science, though, is a step towards AI/ML achieving animal-level understanding very soon. Can a computer or network determine, without opportunity for doubt, a gunshot in a city or a chicken in distress? No - but it can identify a likely victim, or a likely chicken in distress, and send an intervention within a reasonable standard. I wonder if the chick-distress beacon has been trained on just "industrial" chicken sounds or on free-range chickens as well. Do chicks raised in a barn of stacked pens know when they are in distress to a different definition or degree than a chicken in the wild?

Transfer the question to humans and the shooting-victims study. Is the algorithm narrowly focused on potential victims in a certain type of setting and ignorant of other social settings? Is its accuracy maintained in rural Nebraska or Alaska as well as urban Chicago and LA? How much tuning would it take to achieve effectiveness in Mumbai? Lagos? Buenos Aires? Are the resources better directed to existing structures - the social welfare services currently on the ground?
Is the chick better protected by the algorithm or a mother hen?
-
Plus, money and economic activity are fictions that we agree to participate in and which have no value outside our involvement and complicit agreement. ↩