By MIKE MAGEE
How comfortable are the FDA and the Medical Ethics community with a new super-charged medical Facial Recognition Technology (mFRT) that claims it can “identify the early stages of autism in infants as young as 12 months”? That test already has a name: the RightEye GeoPref Autism Test. Its UC San Diego designer says it was 86% accurate in testing 400 infants and toddlers.
Or how about Face2Gene, which claims its mFRT tool has already linked half of the known human genetic syndromes to “facial patterns”?
Or how about employers using mFRT facial and speech patterns to identify employees likely to develop early dementia, and adjusting those individuals’ career trajectories accordingly? Are we OK with that?
What about your doctor requiring AiCure’s video mFRT to confirm that you really are taking the medications you say you are, or, in the future, to monitor any abuse of alcohol?
And might it be possible, even from a distance, to identify you from just a fragment of a facial image, even with most of your face covered by a mask?
The answer to that final question is what DARPA, the Defense Advanced Research Projects Agency, was trying to establish in the Spring of 2020 when it funded researchers at Wuhan University. If that sounds familiar, it is because the very same DARPA, a few years earlier, had quietly funded controversial “Gain of Function” viral re-engineering research by U.S.-trained Chinese researchers at the very same university.
The pandemic explosion a few months later converted the entire local population to 100% mask-wearing, making it an ideal laboratory for testing whether the FRT of the day could identify a specific person from partial, periorbital images alone. It could not, at least not well enough: the studies reported positive identifications only 39.55% of the time, compared with a 99.77% success rate for full faces.
Facial Recognition Technology (FRT) dates back to the work of American mathematician and computer scientist Woodrow Wilson Bledsoe in the 1960s. His now-primitive algorithms measured the distances between coordinates on the face, enriched by corrections for light exposure, head tilt, and three-dimensional pose. That work triggered unexpectedly intense commercial interest in potential applications, primarily from law enforcement, security, and military clients.
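To make that concrete, here is a minimal sketch in Python of the geometric idea behind those early systems. It is not Bledsoe’s actual program, and the landmark names and coordinates below are hypothetical; the point is simply that a face can be reduced to scale-normalized distances between marked points, and two photographs compared by how closely those distance patterns agree.

```python
import math

# A minimal sketch (not Bledsoe's actual program) of the idea he pioneered:
# represent a face as distances between hand-marked landmark coordinates,
# normalize for scale, and compare two faces by how similar those distances are.
# The landmark names and the two example "photographs" below are hypothetical.

LANDMARKS = ["left_eye", "right_eye", "nose_tip", "mouth_left", "mouth_right"]

def normalized_distances(points):
    """Scale-normalized distances between every pair of landmarks."""
    eye_span = math.dist(points["left_eye"], points["right_eye"])  # normalizer
    features = []
    for i, a in enumerate(LANDMARKS):
        for b in LANDMARKS[i + 1:]:
            features.append(math.dist(points[a], points[b]) / eye_span)
    return features

def face_difference(face_a, face_b):
    """Mean absolute difference between two distance profiles (lower = more alike)."""
    fa, fb = normalized_distances(face_a), normalized_distances(face_b)
    return sum(abs(x - y) for x, y in zip(fa, fb)) / len(fa)

# Hypothetical pixel coordinates measured from two photographs.
photo_1 = {"left_eye": (120, 140), "right_eye": (200, 142), "nose_tip": (160, 190),
           "mouth_left": (135, 235), "mouth_right": (185, 236)}
photo_2 = {"left_eye": (118, 138), "right_eye": (198, 141), "nose_tip": (158, 192),
           "mouth_left": (133, 233), "mouth_right": (183, 238)}

print(f"difference score: {face_difference(photo_1, photo_2):.4f}")
```

Modern systems replace these hand-measured geometric features with learned embeddings, but the comparison step, measuring how close two numerical representations of a face sit to one another, is essentially the same.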
The world of FRT has always been big business, but the emergence of large language models and sophisticated neural networks (like GPT-4 and Gemini) has widened its audience well beyond security, with health care now competing for the field’s human and financial resources.
Whether you are aware of it or not, you have been a target of FRT. The US has the world’s highest density of closed-circuit cameras, at 15.28 per 100 people. On average, every American is caught on a closed-circuit camera 238 times a week, and experts say that is nothing compared to where our “surveillance” society will be in a few years.
They are everywhere: security, e-commerce, automobile licensing, banking, immigration, airport security, media, entertainment, traffic cameras, and now health care, with diagnostic, therapeutic, and logistical applications leading the way. (Below is a photo of a mobile Live Facial Recognition project outside a soccer match in London, November 2023. Photo: Matthew Holt)
Machine learning and AI have positioned FRT to displace voice recognition, iris scanning, and fingerprinting. Part of this goes back to Covid, and not just the Wuhan experiments: FRT allowed “contactless” identity confirmation at a time when global societies were understandably hesitant to engage in any flesh-to-flesh contact.
The field of mFRT is on fire. Emergen Research projects annual investment of nearly $14 billion (USD) by 2028, a compound annual growth rate of almost 16%. Detection, analysis, and recognition are all potential winners. There are now 277 unique organizational investor groups offering “breakthroughs” in FRT, with an average of a decade of experience behind them.
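For scale, the arithmetic behind a projection like that is simple compounding, sketched below. Only the ~$14 billion 2028 figure and the ~16% growth rate come from the projection cited above; the 2021 base year is an assumption used purely for illustration.

```python
# Sanity check on what the projection above implies. Only the ~$14B 2028 figure
# and the ~16% CAGR come from the cited projection; the 2021 base year is an
# assumption for illustration.

def implied_base(end_value, cagr, years):
    """Base-year market size implied by an end value and a compound annual growth rate."""
    return end_value / (1 + cagr) ** years

def compounded(start_value, cagr, years):
    """Value after compounding at `cagr` for `years` years."""
    return start_value * (1 + cagr) ** years

end_2028 = 14.0      # USD billions, projected 2028 market size
cagr = 0.16          # ~16% compound annual growth rate
years = 2028 - 2021  # assumed seven-year projection window

base = implied_base(end_2028, cagr, years)
print(f"implied ~2021 market size: ${base:.1f}B")                           # ~ $5.0B
print(f"recompounded to 2028:      ${compounded(base, cagr, years):.1f}B")  # ~ $14.0B
```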
Company names like Megvii, Clear Secure, AnyVision, Clarifai, Sensory, Cognitec, iProov, Trueface, CareCom, and Kairos may not yet be familiar to all, but they soon will be.
The medical research community has already expanded well beyond “contactless” patient verification. According to HIMSS Media, 86% of health care and life science organizations use some version of AI, and AI is expanding FRT in ways “beyond human intelligence” that are not only incredible but frightening as well. Deep neural networks are already invading physician territory, including “predicting patient risk, making accurate diagnoses, selecting drugs, and prioritizing use of limited health resources.”
How do we feel about using mFRT to diagnose genetic diseases, disabilities, depression, or Alzheimer’s, with systems that are loosely regulated or entirely unregulated by the FDA?
The sudden explosion of research into the use of mFRT to “diagnose genetic, medical and behavioral conditions” is especially troubling to Medical Ethicists, who see this adventure as a case of “having been there before,” one that did not end well.
In 1872, it all began innocently enough with Charles Darwin’s publication of “The Expression of the Emotions in Man and Animals.” He became the first scientist to use photographic images to “document the expressive spectrum of the face” in a publication. Typing individuals through their images and appearance “was a striking development for clinicians.”
Darwin’s cousin, the statistician Francis Galton, took that data, synthesized “identity deviation,” and “reverse-engineered” what he considered the “ideal type” of human, “an insidious form of human scrutiny” that would become Eugenics (from the Greek “eugenes,” meaning “well born”). Expansion throughout academia rapidly followed, and validation by our legal system helped spread and cement the movement’s reach to all kinds of “imperfection,” with sanitized human labels like “mental disability” and “moral delinquency.” Justice and sanity did catch up eventually, but it took decades, and that was before AI and neural networks. What if Galton had had Gemini Ultra, “explicitly designed for facial recognition”?
Complicating our future further, say experts, is the fact that generative AI with its “deep neural networks is currently a self-training, opaque ‘black box’…incapable of explaining the reasoning that led to its conclusion…Becoming more autonomous with each improvement, the algorithms by which the technology operates become less intelligible to users and even the developers who originally programmed the technology.”
The U.S. National Science Advisory Board for Biosecurity recently recommended restrictions on “Gain of Function” research, belatedly admitting the inherent dangers posed by scientific and technologic advances that lack rational and effective oversight. Critics of the “Wild West approach” that may have contributed to the Covid deaths of more than 1.1 million Americans are now raising “red flags” again.
Laissez-faire as a social policy doesn’t seem to work well at the crossroads of medicine and technology. Useful, even groundbreaking, discoveries are likely on the horizon. But profit-seeking mFRT entrepreneurs, in total, will likely add cost while further complicating an already beleaguered patient-physician relationship.
Mike Magee M.D. is a Medical Historian and regular contributor to THCB. He is the author of CODE BLUE: Inside America’s Medical Industrial Complex. (Grove/2020)