The Mainichi, with no byline:
Fukoku Mutual Life Insurance Co. is planning to slash nearly 30 percent of its payment assessment department’s human staff after it introduces an artificial intelligence (AI) system in January 2017 to improve operating efficiency.
Fukoku Mutual has already begun staff reductions in preparation for the system’s installation. In total, 34 people are expected to be made redundant by the end of March 2017, primarily from a pool of 47 workers on about five-year contracts. The company is planning to let a number of the contracts run out their term and will not renew them or seek replacements.
The insurance firm will spend about 200 million yen to install the AI system, and maintenance is expected to cost about 15 million yen annually. Meanwhile, it’s expected that Fukoku Mutual will save about 140 million yen per year by cutting the 34 staff.
About a month ago, I finished reading Cathy O’Neil’s excellent “Weapons of Math Destruction,” and I’m currently midway through “Data Love” by Roberto Simanowski.1 While finding out why an institution has made a particular decision has always been somewhat difficult, both books make the case that offloading a decision to mass data collection and automation can have disastrous consequences that aren’t fully understood. Furthermore, there’s a sense of certainty and finality to a decision made by a computer program: humans can see nuance and context, but a machine typically can’t. To make matters worse, the specific rationale for a machine’s decision may never be known, because the source code is almost always considered confidential.
This is the direction we’re headed, and, while I don’t want this to come off as curmudgeonly, unregulated and proprietary big data programs are making decisions we don’t fully understand or control. That ought to be concerning.
1. Both of those links are affiliate links.