The insurance industry is, by nature, conservative. It tends to be a late adopter of technology. So it probably says something about A.I.’s maturity as a technology that it is now making rapid inroads with many of the insurance industry’s biggest players.
Tractable, a U.K.-based company that uses computer vision to help insurers assess damage from photos, is a prime example of this trend. The company started out with auto insurance, first in Europe and now in the U.S., where its customers include Geico, Hartford Financial, and others. It has also begun moving into home insurance, helping insurers rapidly process claims following natural disasters. And it has expanded beyond computer vision, increasingly incorporating natural language processing to pull information from documents. Other companies are developing similar systems, including insurer USAA, which has worked with Google's cloud computing unit to build its own photo-based damage estimation system.
Tractable recently announced a partnership with American Family Insurance, one of the largest property and casualty insurers in the U.S. American Family is using Tractable’s A.I. to streamline a process called subrogation. Subrogation is how an insurance company tries to recover from another insurer money that it paid out for a claim. For instance, let’s say Bob runs a stop sign and plows into Mary, who also happens to be speeding. Both their cars are damaged. Bob and Mary claim on their respective insurance, which pays them out to repair the damage. Then, later, Bob’s insurer and Mary’s insurer negotiate with one another to try to recoup some of the cost of the claim. That’s subrogation. (Tractable is not the only company trying A.I. in subrogation. Rival Klear.ai, which also offers a variety of A.I. solutions for the insurance industry, offers a subrogation product too.)
Julie Kheyfets, Tractable’s vice president and general manager for North America, says that a typical subrogation claim might involve a 100-page file thick with photographs and descriptions of damage, repair invoices, loaner car information, and more. Poring over the document can take a human claims expert hours. Tractable’s A.I. can ingest the file and produce an assessment of whether the subrogation claim seems reasonable in minutes. Tractable’s A.I. has been trained on tens of millions of photos of auto damage, and it can instantly spot inconsistencies, such as someone who claims they were rear-ended when in fact the damage to the car is more consistent with the driver hitting a tree.
That means the human employees who process claims have “more time and emotional bandwidth” to deal with customers, Chris Conti, chief claims officer at American Family, said in a statement. That should ultimately improve customer satisfaction with the claims process. Unspoken, as in many conversations about A.I. augmenting humans, is that by allowing a company to potentially process more claims with the same number of employees, there is a labor cost savings. It isn’t that the A.I. software eliminates jobs per se, but that additional people don’t necessarily have to be hired to handle a larger claims volume.
Using Tractable also means, Kheyfets says, that subrogation claims are assessed in a consistent manner. A drawback of humans is that experts often assess damage differently from one another. “Estimating damage is quite subjective,” she says. “Two appraisers or body shops give different numbers for the same damage.” Claims adjusters can even be inconsistent in how they judge similar damage across two different subrogation files. Catch an appraiser at the end of a long day, and a dent that might have been a $1,000 claim in the morning is suddenly only a $700 one. Along with time savings, consistent damage assessment is one of the main selling points of Tractable for insurance companies, Kheyfets says. Consistency helps make total claims more predictable. It can also save time on quality control audits and potentially reduces back-and-forth over costs between the two insurance companies involved in the subrogation.
Right now, the subrogation negotiation still takes place between two humans. But Kheyfets anticipates that in the not-too-distant future, one insurance company’s A.I. system might simply talk to the other insurance company’s A.I. system, and they will settle the claim between them automatically.
With that, here’s the rest of this week’s A.I. news.
A.I. IN THE NEWS
DeepMind uses A.I. to predict a structure for almost every protein known to biology. As I reported in last week’s special edition of Eye on A.I., DeepMind, the London-based A.I. company that is owned by Alphabet, used its AlphaFold A.I. system to produce predicted structures for almost every protein known to biology. The development is a major advance for basic science, and may ultimately accelerate drug discovery, research into cancer and genetic diseases, and lead to big advances for agriculture and sustainability.
Palantir extends A.I. contract with U.S. Army. Palantir, the data analytics software company, has extended its contract with the U.S. Army Research Lab in a deal worth just under $100 million over two years. The contract will see Palantir continue to develop A.I. technology for the U.S. Army’s combatant commands, according to a company statement. It began working with the U.S. Army Research Lab in 2018.
British supermarket chain under fire for use of facial recognition technology. The Southern Co-Op chain, which has stores throughout the south of England, has been accused by privacy watchdog Big Brother Watch of “Orwellian” and “deeply unethical” uses of facial recognition technology in a complaint the group filed against the supermarket with the U.K. Information Commissioner’s Office, my Fortune colleague Alice Hearing reports. The privacy watchdog says the company is harvesting people’s biometric data without consent and building opaque “watch lists” of potential shoplifters and others it doesn’t want in its stores. The company told Hearing it “would welcome any constructive feedback from the ICO as we take our responsibilities around the use of facial recognition extremely seriously and work hard to balance our customers’ rights with the need to protect our colleagues and customers from unacceptable violence and abuse.”
Accident leads to questions about technology at self-driving truck company TuSimple. The Wall Street Journal reports that government investigators are asking tough questions about TuSimple’s A.I.-enabled self-driving trucks after one of the vehicles was involved in a single-vehicle crash on a major highway in April. The paper reported that “An internal TuSimple report on the mishap, viewed by The Wall Street Journal, said the semi-tractor truck abruptly veered left because a person in the cab hadn’t properly rebooted the autonomous driving system before engaging it, causing it to execute an outdated command. The left-turn command was 2 1/2 minutes old—an eternity in autonomous driving—and should have been erased from the system but wasn’t, the internal account said. But researchers at Carnegie Mellon University said it was the autonomous-driving system that turned the wheel and that blaming the entire accident on human error is misleading. Common safeguards would have prevented the crash had they been in place, said the researchers, who have spent decades studying autonomous-driving systems.” TuSimple told the paper it has since made modifications to its systems to prevent a similar accident. Nonetheless, the crash is a serious setback for TuSimple and potentially the entire self-driving truck ecosystem.
Artist using OpenAI’s DALL-E to redesign city streets. Zach Katz, a Brooklyn, New York-based artist, has been feeding images of various streetscapes in the U.S. to DALL-E, the impressive image generation software built by OpenAI, and asking it to reimagine the photographs with streets that are more pedestrian- and public transport-friendly, according to a Bloomberg News story. Side-by-side examples of the original street view and the DALL-E redesigns have gone viral on social media. It’s a good example of how DALL-E is becoming a powerful tool for creativity and design work and may be a harbinger of future uses of such technology. OpenAI recently took steps toward offering DALL-E as a commercial product. Previously it was only available to a select group of pilot users for free.
India using A.I. to help keep an eye on endangered tiger populations. The BBC says rangers in the country’s national parks have begun to use computer vision technology to help automatically catalogue and count tiger images captured by trail cameras deployed throughout the country’s tiger reserves and national parks.
EYE ON A.I. TALENT
Brain Corp., the robotics company based in San Diego, CA, has named Michael Spruijt its new chief revenue officer, according to a story in trade publication Robotics Tomorrow. Spruijt was previously Brain Corp.’s senior vice president, international business.
Sigma7, the New York-based cybersecurity and risk services company, has named Jennifer Gold its chief technology officer, the company said in a press release. Gold had previously been a technology consultant to J.P. Morgan Chase & Co, as well as vice president of engineering at AQR Capital Management.
EYE ON A.I. RESEARCH
Teaching A.I. to think about what could go wrong. Reinforcement learning is a powerful way to train A.I. systems, in part because it enables the software to find good strategies for achieving some goal that humans have never conceived. Increasingly, reinforcement learning is making its way into business through more powerful simulators, including so-called digital twins, in which an entire operation (often a factory or warehouse) is simulated.
But a big problem with reinforcement learning is that while it will learn the best strategy for any given situation, it often won’t take into account the potential risks if it gets the probabilities wrong and something unexpected happens. For instance, if running a particular machine in a factory at its maximum speed has a 99% chance of resulting in optimal production for the entire factory, but a 1% chance of causing the machine to explode, an A.I. naively trained with reinforcement learning might still think that running the machine at maximum speed was the best strategy—even if the consequences of the machine exploding would be catastrophic. This is also a particular problem in scenarios that are adversarial—where a person or another A.I. is specifically looking to exploit weaknesses in an opposing system. Here the adversary has an incentive to try unusual, low-probability actions in an effort to find the A.I.’s weaknesses.
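The factory example above can be made concrete with a little arithmetic. The sketch below is purely illustrative (the reward numbers are invented, and this is not the DeepMind/Alberta method, just the simplest possible contrast): an agent that maximizes expected reward prefers the risky action, while an agent that maximizes the worst-case reward, one basic robust objective, prefers the safe one.

```python
# Toy example: why naive expected-value reinforcement learning can
# prefer a catastrophic-risk action. Numbers are invented for illustration.
#
# "max_speed":  99% chance of reward 100, 1% chance of -500 (machine explodes)
# "safe_speed": always yields reward 90
outcomes = {
    "max_speed":  [(0.99, 100.0), (0.01, -500.0)],
    "safe_speed": [(1.0, 90.0)],
}

def expected_value(dist):
    """Probability-weighted average reward of an action."""
    return sum(p * r for p, r in dist)

def worst_case(dist):
    """Lowest reward that can occur with nonzero probability."""
    return min(r for p, r in dist if p > 0)

# Naive agent: pick the action with the highest expected reward.
# max_speed scores 0.99*100 + 0.01*(-500) = 94, beating safe_speed's 90.
naive_choice = max(outcomes, key=lambda a: expected_value(outcomes[a]))

# Risk-averse agent: pick the action whose worst outcome is best (maximin).
# safe_speed's worst case (90) beats max_speed's (-500).
robust_choice = max(outcomes, key=lambda a: worst_case(outcomes[a]))

print(naive_choice)   # max_speed
print(robust_choice)  # safe_speed
```

The gap between the two choices is exactly the problem the researchers are attacking: a small improvement in average reward can hide an unacceptable worst case, and an adversary can deliberately steer the system toward that 1% branch.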
Trying to use reinforcement learning to train an A.I. to both find a good strategy and avoid worst-case outcomes has been technically difficult. But a group of researchers from DeepMind and the University of Alberta have now come up with a way to make reinforcement learning algorithms more robust to worst-case outcomes. They did so by building on earlier work that looked specifically at A.I. trained to play poker, then generalizing the insights to other domains. You can read the research paper, which was presented at the International Joint Conference on Artificial Intelligence in Vienna, here.
FORTUNE ON A.I.
Supermarket chain under fire over its use of ‘Orwellian’ facial recognition technology and ‘secret watch-lists’ to cut crime—by Alice Hearing
Google’s AI chatbot—sentient and similar to ‘a kid that happened to know physics’—is also racist and biased, fired engineer contends—by Erin Prater
Mark Zuckerberg ignores objections, says Instagram will show twice as much A.I.-recommended content by end of 2023—by Chris Morris
A.I. is rapidly transforming biological research—with big implications for everything from drug discovery to agriculture to sustainability—by Jeremy Kahn
Will deep learning ever be able to learn symbolic logic? That question is the subject of heated debate among A.I. researchers, cognitive psychologists, neuroscientists and linguists. In the current issue of Noema, the magazine of The Berggruen Institute, Yann LeCun, a famous pioneer of deep learning and New York University professor who is now the chief A.I. scientist at Meta, and Jacob Browning, a postdoctoral researcher in computer science at NYU who specializes in the philosophy of A.I., provide an overview of the current state of the debate.
The essay has attracted a lot of attention on social media from both sides of the argument. LeCun is known to be in the camp of those who think it is possible that deep learning systems will one day be able to learn symbolic logic, which underpins any real understanding of mathematics, language, and a lot of common sense reasoning. But he is less dogmatic and more circumspect than some other deep learning pioneers such as Geoff Hinton and his former student Ilya Sutskever, now the chief scientist at OpenAI, who are absolutely convinced that simply scaling up today’s neural network architectures will be enough to eventually deliver symbolic logic too.
On the other side of the debate are cognitive psychologists such as former NYU professor Gary Marcus and many others who see strong evidence that in people—and to some extent in animals too—symbolic logic is innate, not learned. This camp thinks that the best way to imbue A.I. with symbolic reasoning is to create hybrid systems that combine deep learning for perception and hard-coded symbolic A.I. for a lot of reasoning tasks. Alternately, they argue that a completely different approach to A.I., other than deep neural networks, will be needed to equal or exceed human intelligence.
Spoiler alert: in the end, LeCun and Browning come down on the side of deep learning and against hybrid approaches. But the essay is an excellent primer on the state of the debate and worth a read and a think.