
Could face and voice recognition become the new 'phrenology'?

134 points · 3 years ago · cbc.ca
jamal-kumar · 3 years ago

Clearview AI tried to hire me before anyone in the wider public knew about them, and that was pretty much how I felt about it at the time, when I turned the offer down. I'm glad to see it being talked about now.

I like to call it inspector gadgetarianism. Cops LOVE this shit. They go in for all sorts of practices: denying that you can easily pass lie detection testing, penile plethysmography (if they want to determine whether a potential recruit is a pervert, they hook their meat up to this machine and show them shit like dudes copulating with chickens), and that time the US military got those ridiculous fake bomb detectors that did literally nothing.

It's a miscarriage of justice but that's what you get when you have an organization that does stuff like not even letting you in if they think you're too smart https://abcnews.go.com/US/court-oks-barring-high-iqs-cops/st...

DonHopkins · 3 years ago

Penile plethysmography isn't fair to people who are easily aroused by having electrical devices attached to their penises.

https://en.wikipedia.org/wiki/Erotic_electrostimulation

runnerup · 3 years ago

Using an alt-account for this as I don't normally post on non-technical or potentially controversial topics.

I'm working on a product today which includes a PPG sensor on an innocuous part of the body (e.g., the wrist). While we expose many of the sensors to developers via an SDK API, the PPG signal is heavily processed (downsampled, in a sense) to minimal information such as heart rate. This is due to concerns about application developers using the raw PPG to make health inferences and/or to identify and track individual users of our device.
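
Roughly, the kind of reduction I mean looks like this -- a minimal illustrative sketch, not our actual pipeline (a real one also needs motion-artifact rejection and quality gating) -- collapsing a raw waveform into a single BPM figure:

    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    def heart_rate_from_ppg(ppg, fs):
        # Band-pass around plausible cardiac frequencies
        # (0.7-3.5 Hz, i.e. roughly 42-210 BPM).
        b, a = butter(2, [0.7, 3.5], btype="band", fs=fs)
        filtered = filtfilt(b, a, ppg)
        # Each systolic peak is one heartbeat; enforce minimum spacing
        # so smaller ripples aren't double-counted.
        peaks, _ = find_peaks(filtered, distance=fs * 60 / 210)
        if len(peaks) < 2:
            return None  # not enough signal to estimate a rate
        ibi = np.diff(peaks) / fs  # inter-beat intervals, in seconds
        return 60.0 / ibi.mean()   # beats per minute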

One of our demos was to a defense company and an engineer there was appalled at all the sensors we were trying to utilize. He said something to the effect of "No way I'm wearing that. You could do a lot of fucked-up shit with that. I would know, because I'm a fucked-up guy and my job is making that fucked-up shit."

He followed up with: "Could you use that PPG sensor to see if I have an erection?" and the question was drowned out by laughter across the room.

One of our data scientists who works closely with the PPG sensor models told me in private later: "I actually thought that was a really good question. I thought about it for a while, and I think the answer is yes -- we could." I don't know if he's right -- the data we get from the PPG sensor is pretty dirty, especially whenever the wearer is moving around. But our signal processing pales in comparison to what Apple is able to achieve, so maybe someone better could. Or maybe not; it's not something we're spending engineering time to determine!

jamal-kumar · 3 years ago

I mean I did have a good chuckle at your story but... I gotta ask, what are the actual useful applications for what you are building?

I've had to step back from a ton of opportunities (like I said, Clearview AI rang some alarm bells) because they weren't the right thing to do. Some guy yesterday tried to rope me into one of those land banking scams, and I actually sent him goatse and told him to delete my number -- that's how far away I want people with less-than-good ideas. Can't even pay me enough to violate my principles.

runnerup · 3 years ago

In this case the PPG sensor is pretty peripheral to the value proposition, which is why we can reduce it to just heart rate and still satisfy the needs of our customers (application developers). There are a number of other biometrics used which are more salient for what we're looking at, which is generally related to training people on new skills or relearning existing skills differently than they were originally learned.

Most of the applications so far are non-defense and pretty non-controversial.

I'd genuinely be happy to talk about it with you offline, but I'd rather not publicly associate the product with that story: the product is pretty cool overall, and I'm a major technology skeptic in general, so it takes a lot for me to decide something is potentially valuable and not evil.

Also, the main misunderstanding there was around the point: "No, our API does not expose that raw PPG sensor data, so no application developer could make that inference."

In terms of stepping back... I agree. I've done that on previous projects: left companies or forced gross changes in architecture to satisfy my conscience.

It really helps this time that I'm working with a lot of people who aren't afraid to share stories like this one and talk about them candidly, as well as share links on public internal Teams groups about all kinds of privacy concerns over biometrics (there's no fear of creating a paper trail which shows "we knew or should have known"). As we are building the "platform" rather than the applications, everyone genuinely seems to feel it's important to limit the harm that third-party developers can cause. And for now at least, the product is actually quite hobbled by erring towards safety.

That said, when you ask "what are the useful applications", that's a very difficult question for me to answer right now. Mainly I think the platform has a ton of potential, but each third-party application probably needs a lot of doctoral-level psychology research to properly apply the platform's capabilities to its specific situation. In lieu of that, developers have to either get lucky or have incredibly good intuition about the domains and people they're creating applications for.

DonHopkins · 3 years ago

Just to be clear, the first "P" in "PPG" stands for "Photo", not "Penile", right?

https://en.wikipedia.org/wiki/Photoplethysmogram

But to be fair, there's also VPPG and CPPG:

https://en.wikipedia.org/wiki/Vaginal_photoplethysmograph

https://en.wikipedia.org/wiki/Clitoral_photoplethysmograph

So maybe Apple should call their touch sensing and haptic feedback technology the "Faptic Engine", since the Apple Watch can detect when and how vigorously you're masturbating? (Apple Watch users: Do you take it off, or use the other hand, or is there "an app for that"?)

https://www.webopedia.com/definitions/taptic-engine/

>A combination of “tap” and “haptic feedback,” taptic engine is a name Apple created for its technology that provides tactile sensations in the form of vibrations to users of Apple devices like the Apple Watch, iPhones, iPads, and MacBook laptops.

>Apple debuted its taptic engine technology along with the Force Touch feature in the 2015 editions of Apple MacBook notebooks and the initial Apple Watch, and the two features work together to provide a user with more input control and feedback.

https://appleinsider.com/articles/16/09/27/inside-the-iphone...

>Apple has moved the Taptic Engine to its third device — the iPhone 7. The new technology replaces the older linear actuator, and will ultimately bring a world of force feedback sensations to the user, as developers implement the technology in their own apps.

>The Taptic Engine is Apple's implementation of haptic user interface feedback. Using a linear actuator, devices like the iPhone 7 can reproduce the sensation of motion or generate new and distinct tactile experiences. In some cases, audio feedback from onboard speakers completes the illusion.

https://www.patentlyapple.com/patently-apple/2020/01/apple-w...

>Apple Won 39 Patents Today covering an Enhanced PPG Sensor System for Apple Watch, 3D-Haptic Touch & more

Maybe there's another application (and nickname) for the Tap Strap 2:

https://www.tapwithus.com/

Tap Strap ASL (tracking the raw Tap Strap 2 device data and recognizing American Sign Language with machine learning):

https://www.youtube.com/watch?v=sqjacp5Pqs0

runnerup · 3 years ago

Yes -- in our case, and in the general case, PPG can be assumed to stand for photoplethysmogram. I'm also not aware of any examples where "photoplethysmogram" was used to mean "penile (photo)plethysmography"; generally, what I've read on penile plethysmography describes direct measurements of the penis itself, not optical measurements of blood flow elsewhere on the body.

Because of that, I don't think it's reasonable to call Apple's touch sensing and haptic feedback technology the "Faptic Engine" at this time! But it is a funny pun!

We merely entertained the idea that it might be possible to use machine learning to infer erection status from a PPG sensor somewhere else.

Spare_account · 3 years ago

>that time the us military got those ridiculous fake bomb detectors that did literally nothing.

https://en.wikipedia.org/wiki/ADE_651

Are you referring to the ADE 651? I wasn't aware that it was sold to the US military.

jamal-kumar · 3 years ago

Oh yeah, not that one, but the history of this fraud goes BACK some years. Check out this little piece of inspector-gadgetarianist nonsense that goes all the way back to 1996:

https://en.wikipedia.org/wiki/Quadro_Tracker

I don't really know what the people who make these are thinking; it seems like a really bad idea to me to scam the only people who can themselves legally go after and detain you.

Couple of other examples:

https://en.wikipedia.org/wiki/Alpha_6_(device)

https://en.wikipedia.org/wiki/GT200

voxic11 · 3 years ago

That particular device wasn't. But other very similar devices have been purchased by the US military.

> The Navy's counterterrorism technology task force tested Sniffex and concluded "The Sniffex handheld explosives detector does not work." Despite this, the US military bought eight for $50,000.

https://en.wikipedia.org/wiki/Sniffex

DonHopkins · 3 years ago

Apparently they make seat pads for polygraph machines to detect anal clenching, a technique for fooling lie detectors.

https://lafayettepolygraph.com/products/seat-activity-sensor...

>Activity Sensor Seat Pad for LX4000, Model 76879S. Price: $ 475.00*

https://news.ycombinator.com/item?id=20312004

>Does the technique of clenching your butt hole actually work to beat a polygraph? (As described in my favorite "The Americans" season 2 episode 7, "Arpanet".) Does it also help to visualize someone you love at the same time? ;)

If it actually works then, judging by his photo in the article, Charles Wayne Humble looks like he could lie through his teeth while passing a polygraph exam with flying colors!

https://en.wikipedia.org/wiki/Arpanet_(The_Americans)

>After consulting with Arkady and Oleg, and with the promise of coaching from Oleg, Nina tells Stan that she will take the FBI's polygraph test. Oleg suggests a few techniques including that she visualize him in the room as well as clenching her anus.

https://www.youtube.com/watch?v=s3OMSMq9zPA

>The Americans Season 2 Episode 7 "Arpanet" Review: "I like when I learn something from an episode, and now I know ..." "If you're having to do a polygraph, squeeze your anus."

https://tv.avclub.com/the-americans-arpanet-1798180091

(One of the ways to beat a polygraph turns out to be clenching one’s anus. This show is full of helpful hints.)

https://www.esquire.com/entertainment/tv/a28316/spy-on-the-a...

>ESQ: The show featured Nina learning to beat and eventually beating a polygraph. How easy is it to do that?

>PE: We have a number of real-world instances. The Aldrich Ames case. He went through the polygraph twice, after he went to work for the Soviets. Administering a polygraph is an art, not a science. That's why it's not admitted in court. People have claimed to have had training to beat the polygraph. Everything from tightening your sphincter to breathing a certain way, and so forth.

>ESQ: Speaking of sphincters, the trick she's told is "squeeze your anus." Is that a thing?

>PE: I can't confirm. [Laughs] I took several polygraphs. Taking them is a standard thing in the intelligence life.

https://news.ycombinator.com/item?id=20311928

>Or if you deliberately clench your anal sphincter, that's enough to change results. And yes, that actually works; that's why some lie detectors now feature ass-clenching detection technology.

jamal-kumar · 3 years ago

Not going to lie, all I can think of reading your comment is the book "How to Good-bye Depression: If You Constrict Anus 100 Times Everyday. Malarkey? or Effective Way?"

https://www.amazon.com/How-Good-bye-Depression-Constrict-Eve...

Good read for all them anus-clencher fanatics, I suppose.

mc32 · 3 years ago

That seems like a decoy: make people rely on it and then miss something else. Why couldn't they just bite their tongue, if all you need is an overriding stimulus?

Moreover, that assumes a premise where a lie detector can actually detect lies in the first place.

wumpus · 3 years ago

I have a friend who's terrified of this new phrenology. She's missing some muscles in her face, such that she blinks weirdly. She can only mostly close her eyes, thanks to an operation in her childhood that took a muscle from her butt and placed it diagonally on her forehead. At best, her blinks are asymmetrical. At night, she has to sleep with dark cloth over her eyes because she can never fully close them.

Her terror revolves around being profiled as being overly nervous at airports. Of course, that worry makes her actually nervous at airports.

Permit · 3 years ago

This article is talking about two very different things:

1. Using video/audio of a person to make statements about their personality.

2. Using video/audio of a person to identify them.

The former seems as though it’s largely pseudoscience and should be avoided on the basis that it simply does not work.

The latter may be inaccurate, biased, and problematic, but it does not seem to qualify as pseudoscience. I would imagine such systems will continue to improve in accuracy. Do others really consider facial recognition and voice recognition (as the title suggests) to be like phrenology?

akiselev · 3 years ago

> Do others really consider facial recognition and voice recognition (as the title suggests) to be like phrenology?

I won't jump to that conclusion but I'm skeptical. Phrenology seemed like it worked (to some) until real rigor and statistically significant sample sizes put it to rest. I suspect that may happen to facial and voice recognition too - if limited to a small, known group size or heavily controlled circumstances it works well, but the second you apply it to nontrivial real world populations over time it loses most of its predictive power. Obviously even in limited applications the technology is far more useful than phrenology ever was, just going by the number of people using Face ID.

At the end of the day, the predictions facial/voice recognition algorithms make are far more specific and obviously testable than phrenology ever was, but we don't even have conclusive evidence that humans can precisely identify individuals using only sight or sound without a mental context that would approach a general AI. Even parents, for example, can have trouble differentiating identical twins without contextual clues like personality traits, schedule, or preferred fashion.

PeterisP · 3 years ago

IMHO it all relates to what you mean by "precisely identify": if we're talking about automating some process that's currently done by humans, then it does not need to be able to distinguish identical twins; it only needs to be roughly comparable to what the humans can do. And the "benchmark human" is not a close relative using contextual clues; it's some bored, overworked clerk who'd otherwise be looking at the same pictures.

wearywanderer · 3 years ago

I think whether or not these methods are pseudoscience is irrelevant. Why?

Even if phrenology had worked most of the time, it would still be a terrible idea. Even if phrenologists could classify criminals with 99% accuracy from the shape of their skull, they'd still be screwing over a ton of innocent people when their methods were applied to large populations. Putting somebody in prison because of the shape of their skull is a terrible proposition, even if you get it right more often than not.

wruza · 3 years ago

That’s sad, but we already do lots of statistical analysis, in good and bad ways. Digitizing it further would only help to shed more light on the bad ways. It’s unclear to me whether what you fear about that digital racism software isn’t already here (e.g., detectives who prioritize suspects by skull shape, or HR departments doing the same). The fact that we cannot see it doesn’t make it less terrible.

Edit: I’m basing this on the premise that, working with a lot of subjects, a human inevitably creates a structure in their brain similar to what we’re discussing (professional deformation is a thing). I bet it’s often even worse than $subj, because a human mind tends to simplify its job for energy-consumption reasons.

wombatpm · 3 years ago

If it worked, there could have been an effort around Applied Phrenology, where bumps are added to people's skulls to improve behavior.

mlang23 · 3 years ago

I feel the same way about automatic driving.

Even if the accuracy and error rate surpass humans', every death due to an AV failing is one death too many. However, most people seem to be happy with a failure rate lower than my grandma's...

zdragnar · 3 years ago

Human eyewitness testimony is also imperfect. How often do you see someone or hear a voice that you think you recognize only to find out it was someone else?

What you are arguing presumes that voice and face recognition would be accepted as absolute fact, which is not how they ought to be treated. They shouldn't be considered any more reliable than a human, and any use otherwise should be discouraged. Don't throw the proverbial baby out with the bathwater.

shmageggy · 3 years ago

Human eyewitness relates to an event. "Did you see this person on the night of January 14" etc. Phrenology is based on immutable characteristics of a person's physiology. Predicting criminality based on any immutable feature seems categorically wrong to me. If facial and voice recognition are used in relation to specific events, that's one thing, but using them to predict some sort of innate propensity for doing crime sounds like pure bathwater.

wumpus · 3 years ago

You might want to check out the bit in the article mentioning the study that claimed you could figure out a person's sexual orientation from their photo.

There are also many well-known algorithms that attempt to predict how nervous or angry a person is from a photo.

Edit: as for criticisms of image recognition systems themselves, they have different false positive rates for different skin colors.

fighterpilot · 3 years ago

What is your basis for saying that (1) is pseudoscience? A mere photo can be used to guess political preferences fairly well. Why not personality?

wearywanderer · 3 years ago

It may not be pseudoscience. But even if the methods are accurate, it is still prejudice.

> (countable) An adverse judgment or opinion formed beforehand or without knowledge of the facts.

> (countable) Any preconceived opinion or feeling, whether positive or negative.

fighterpilot · 3 years ago

Suppose for argument's sake that it is fairly accurate.

What would then be the argument that we shouldn't use this information to, say, set insurance premiums, when we already happily accept the use of other prejudicial information such as age and gender to do so?

pjc50 · 3 years ago

Who's "we"? An outcome of EU gender equality law was that car insurance premiums may no longer depend on gender even though it is correlated with risk.

etripe · 3 years ago

> A mere photo can be used [to] guess political preferences fairly well

Really? Could you provide me with a link? It seems counter-intuitive. I look pretty much like my barber, but he's right-wing tradcon and I'm oldskool left-wing.

mattkrause · 3 years ago

Nope, but your example nails it.

It’s fairly straightforward to guess someone’s age, gender, race and maybe some cultural markers from a photograph.

Those demographic factors are correlated with people’s political views: old white men tend to be Republican, folks with rainbow hair tend to skew liberal, etc.

On a group level, this seems to let you predict political views, but there’s no mechanism that lets it work for an individual.

etripe · 3 years ago

Thanks, that's very interesting. The other linked studies are, too.

DonHopkins · 3 years ago

A more precise term of art is "Speaker Recognition" as opposed to "Voice Recognition", so as not to be confused with "Speech Recognition".

https://en.wikipedia.org/wiki/Speaker_recognition

>Speaker recognition is the identification of a person from characteristics of voices. It is used to answer the question "Who is speaking?" The term voice recognition can refer to speaker recognition or speech recognition. Speaker verification (also called speaker authentication) contrasts with identification, and speaker recognition differs from speaker diarisation (recognizing when the same speaker is speaking).

https://en.wikipedia.org/wiki/Speaker_diarisation

>Recognizing the speaker can simplify the task of translating speech in systems that have been trained on specific voices or it can be used to authenticate or verify the identity of a speaker as part of a security process. Speaker recognition has a history dating back some four decades as of 2019 and uses the acoustic features of speech that have been found to differ between individuals. These acoustic patterns reflect both anatomy and learned behavioral patterns.
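
To make the quoted distinctions concrete, here's a toy sketch; the embeddings and the 0.7 threshold are stand-ins for whatever acoustic features and scoring a real system uses:

    import numpy as np

    def cosine(a, b):
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Speaker verification (1:1): is this the person they claim to be?
    def verify(probe_emb, enrolled_emb, threshold=0.7):
        return cosine(probe_emb, enrolled_emb) >= threshold

    # Speaker identification (1:N): who, out of everyone enrolled, is speaking?
    def identify(probe_emb, gallery):
        return max(gallery, key=lambda name: cosine(probe_emb, gallery[name]))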

It's actually been used in criminal cases:

>Speaker recognition may also be used in criminal investigations, such as those of the 2014 executions of, amongst others, James Foley and Steven Sotloff.

https://www.theguardian.com/media/2014/sep/02/steven-sotloff...

>An investigation is under way to establish whether the man dubbed Jihadi John is behind a second murder after a British-accented man was shown in the video depicting the killing of Steven Sotloff.

>Security sources said that although there were similarities between the voice on the film that emerged on Tuesday and that depicting the murder of James Foley a fortnight ago, the figure is largely hidden in black clothing.

>A man with a British accent was seen in an Islamic State video posted last month in which the American journalist James Foley was beheaded. He was dubbed Jihadi John after one former hostage spoke about three Britons, collectively known as the Beatles, who were among their captors, one of whom went by the name of John. In both videos, the speaker has what appears to be a London accent.

mistrial9 · 3 years ago

Some theories of social identity build a construct like this: the more socially important (for whatever reasons) a person is, the more detail and currency there are in their ID or profile, by many measures. It would apply here like so: a lot of detail in identifying a television personality going through your security gates, and, as a side effect, a lot of pressure on a person who happens to look a lot like that personality; but ordinary people of ordinary means would have both less detail overall to ID them, and more errors in it, overlapping with others.

Thereby you would get many effects, like people who fit certain demographic and cultural slots at a given place and time getting a lot of false positives through no fault of most of them. Other examples are possible...

gentleman11 · 3 years ago

For anyone else not sure what phrenology is, the article says:

> In the early 19th century, some scientists became convinced that they could predict someone's personality and behaviour based simply on the shape of their head.

> Known as phrenology, this pseudo-science accelerated notions of racism, intellectual superiority, and caused many to suffer just because of what they looked like—some people were even imprisoned because the contours of their skulls suggested "criminality."

spywaregorilla · 3 years ago

The problem with phrenology as a policy-making tool isn't that it's largely bullshit (though it was), but that it was, on its principles, unfair. The same is true of this tech. Accuracy isn't the issue, even if this one is also largely pseudoscience.

version_five · 3 years ago

Yes, exactly! Accurate or not, it's completely unfair to generalize about people in relation to anything nontrivial.

josephcsible · 3 years ago

I'd use a slightly different metric for unfairness: it's unfair to use factors that people have no control over to make predictions about things that they do have control over.

fighterpilot · 3 years ago

If we're running airport security -- putting aside the security-theatre aspect -- do we really want to search a 90-year-old female pensioner as often as a 21-year-old man, when the former is significantly (as in, 100x or thereabouts) less likely to pose a threat to a flight? Isn't that just a waste of resources?
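
To put toy numbers on the resource claim (these figures are invented purely for illustration):

    # Hypothetical base rates, invented only to illustrate the argument above.
    p_threat_young_man = 1e-6   # assumed P(threat) for the higher-risk group
    p_threat_pensioner = 1e-8   # 100x lower, per the claim

    searches = 1_000_000        # searches spent on each group

    print(searches * p_threat_young_man)   # ~1.0 expected threat found
    print(searches * p_threat_pensioner)   # ~0.01 expected threats found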

Or consider health insurance. Do we really want young people paying the same premiums as 80 year olds?

Perhaps you think there's a negative societal byproduct of making a certain group of people feel marginalized/targeted. Does wanting to reduce this justify clear resource wastage, such as in the airport security example above?

armchairhacker · 3 years ago

IMO the main issue is that you're punishing (or at least inconveniencing) the person who's predicted to be bad. The 21-year-old man did nothing wrong by being a 21-year-old man, and just because he looks super suspicious doesn't mean he's actually dangerous. And you have to imagine what happens when you are the one who seems super suspicious.

Similarly, 80-year-olds, and especially people with pre-existing conditions, did nothing wrong, so it's not fair to make them pay more.

Possible solutions are: 1) monitor the suspicious individuals without inconveniencing them (e.g., monitor the 21-year-old via a security camera), 2) distribute the inconvenience when possible (e.g., raise everyone's premiums a small amount to cover those who are more likely to be sick), or 3) offer some sort of compensation to people who have to be targeted.

Spooky23 · 3 years ago

Says who? For what threat?

Recall the Pan Am flight that exploded over Scotland. Explosives were concealed in a radio.

Who is to say that grandma isn’t packing such a device?

The threat of a 9/11-style hijacking is largely neutralized by reinforced cockpit doors. But any person can be a mule, knowingly or unknowingly, for contraband of various types.

spywaregorilla · 3 years ago

> Or consider health insurance. Do we really want young people paying the same premiums as 80 year olds?

Health insurance is a perfect example of intentionally restricting factors that may be considered to make a more fair system. Forget age. Most systems don't require a sick person to pay more than a healthy person even if the sick person is nearly guaranteed to take out more than they contribute.

> Perhaps you think there's a negative societal byproduct of making a certain group of people feel marginalized/targeted. Does wanting to reduce this justify clear resource wastage, such as in the airport security example above?

The extra security is the resource wastage. The flip side of "we can do less for this group" is "we can do more for that group".

Also, your comment here heavily implies you think that resource waste is obviously more important than making large groups of people feel marginalized. To be frank, even through the most sociopathic economic lens, I think that exhibiting behaviors which perpetuate racist/religious/political/ageist/sexist/sexual-orientation-biased ideas has significantly higher negative externality costs to society. Especially those that implicitly convey that an outsider group is dangerous.

version_five · 3 years ago

Why does having no control over a factor have anything to do with it? Is it fair to predict criminality based on haircut but not forehead size? In both cases, you are using a non-causal factor to make a conclusion based on some generalization you've seen in a population. That's what's not fair.

josephcsible · 3 years ago

Should police not pay special attention to someone with a swastika shaved into the back of his head who they see going into a synagogue?

quotemstr · 3 years ago

Suppose we lived in an alternate universe where personality characteristics could be inferred (with statistical confidence) from skull bumps (i.e. a world where phrenology worked): in such a world, you could argue that it would be unfair to disadvantage an individual based on his specific skull bumps. Those bumps aren't dispositive in individual cases. At the same time, it'd be wrong (again, in our pretend universe) to ban noticing the statistical connection between skull bump and personality, because that'd just be a fact about how our alternate universe worked, and censoring facts is always wrong.

Back in our world: I don't believe in demonizing technologies because of what people might do with them. Facial recognition works, at least for identification of individuals. I cannot get on board with mandating denial of this fact for fear of bad policy based on it. It is never, ever, under any circumstances whatsoever, acceptable to ban people from noticing true facts about how the world works.

spywaregorilla · 3 years ago

> I don't believe in demonizing technologies because of what people might do with them.

What about demonizing technologies because of what people do with them, all the time, in positions of power?

> Facial recognition works, at least for identification of individuals. I cannot get on board with mandating denial of this fact for the sake of fear of bad policy based on this fact.

That's not really what this article is about, the article is just poorly titled.

> It never, ever, under any circumstances whatsoever acceptable to ban people noticing true facts about how the world works.

That's fine. In this case you should have no issue that I've noticed people who say stuff like this tend to harbor supremacist ideals.

TchoBeer · 3 years ago

We wouldn't ban noticing the statistical connection, but we might ban using it in court cases and hiring decisions.

catlifeonmars · 3 years ago

I’m struggling a bit with the layout of the arguments in this article. In particular, it seems to conflate several related, but different things.

- Face/voice recognition, which is just statistical modeling and threshold testing at the end of the day (see the sketch below).

- (Mis)use of face/voice recognition to test for things that these technologies aren’t going to accurately measure.

- Equating the former to phrenology, when it’s more accurate to equate the latter to phrenology.
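
A toy sketch of what that first bullet means in practice -- the scores, labels, and threshold here are all hypothetical; real systems tune the threshold to a target false-match rate:

    import numpy as np

    def error_rates(scores, labels, threshold):
        # scores: model similarity score for each pair of samples.
        # labels: True where the pair really is the same person.
        scores = np.asarray(scores, dtype=float)
        labels = np.asarray(labels, dtype=bool)
        accept = scores >= threshold            # the "threshold test"
        far = float(np.mean(accept[~labels]))   # false accepts (impostors let in)
        frr = float(np.mean(~accept[labels]))   # false rejects (genuine pairs refused)
        return far, frr

    # Sweeping the threshold trades one error for the other; the recognizer
    # is just this decision rule sitting on top of a statistical model.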

Overall, I agree with the basic premise that face/voice recognition being misused to apply prejudice is a valid concern.

mikeiz404 · 3 years ago

>"However, he warns that attempts to assess someone's personality from their voice—or even their facial expressions—is fraught with ethical and accuracy issues. "Even though there are physiological signals, it's quite possible that the way we interpret them is very culturally biased," he said."

A relevant article on this, which discusses some of the evidence for culture-specific interpretations balanced against existing evidence for their universality: Do Feelings Look the Same in Every Human Face? (https://greatergood.berkeley.edu/article/item/do_feelings_lo...)

Some additional sources as well:

- Misinterpretation of facial expression: a cross-cultural study (https://pubmed.ncbi.nlm.nih.gov/10201283/)

- Perception of Facial Expressions Differs Across Cultures (https://www.apa.org/news/press/releases/2011/09/facial-expre...)

- Are Facial Expressions the Same Around the World? (https://greatergood.berkeley.edu/article/item/are_facial_exp...)

stuntkite · 3 years ago

A decade ago I left a startup I helped found that did deposition video editing because (among other things) our timestamp provider was offering micro-tremor stress (lie) detection in their API, and people were VERY interested in using it. Even if you buy into that bullshit, in this instance the source audio was 32kbps mono. There was nothing keeping us from underlining text red, yellow, and green. It wouldn't be admissible as evidence (like a polygraph), but you best believe that shit would be used for awful shit. I left; they never did that, but I bet it's going on somewhere. I get the desire for tools like this, but it's really just a way around doing better work to determine the facts. The only thing it's really for is coercion.

ineedasername · 3 years ago

They don't actually mention any company using this for personality assessment. That would be a useful detail to include. They mention Clearview AI, but only in the context of facial recognition, not personality.

wmf · 3 years ago

Here's a company that was trying to use an AI lie detector to deny insurance claims: https://www.vox.com/recode/22455140/lemonade-insurance-ai-tw...

Here's a company using AI personality assessment to decide hiring: https://www.washingtonpost.com/technology/2019/10/22/ai-hiri...

pdkl95 · 3 years ago

https://www.voiceofsandiego.org/topics/education/college-stu...

> At the self-described “heart” of the company’s monitoring software is Monitor AI, a “powerful artificial intelligence engine” that collects facial detection data [...] to identify “patterns and anomalies associated with cheating.”

> "... people who have some sort of facial disfigurement have special challenges; they might get flagged because their face has an unexpected geometry.”

This is literally phrenology with a bunch of linear algebra instead of calipers.

ineedasername · 3 years ago

Wow, that's awful. There might be some very general, common characteristics of body language and facial movements that very roughly correlate with certain general situations (panic comes to mind), but those are still going to vary incredibly widely over the population based on so many factors, not the least of which are culture, peer groups, and general personal quirks. A person with a history of panic attacks, for example, may have learned to cope well enough not to completely lose it in a meeting at work. The list of other examples would be endless.

And how do you get a reliable data set for this to begin with? Most schools allow for a formal appeal or inquiry process precisely because anything but very blatant cheating can be a murky area. Even if you had videos and a tagged set of the outcomes of inquiries like that, you still wouldn't have intercoder reliability ratings, and therefore no verified data set. I know some methods that can be used against untagged data, but at the very least you have to actually know whether there was cheating or not.
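
(For reference, the standard check for whether human taggers agree is a chance-corrected agreement score such as Cohen's kappa -- a quick sketch, not tied to any particular proctoring product:)

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        # Agreement between two raters over the same items ("cheated"/"ok"),
        # corrected for the agreement you'd expect by chance alone.
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        labels = set(freq_a) | set(freq_b)
        expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
        return (observed - expected) / (1 - expected)

    # Kappa near 1 means reliable coding; near 0 means agreement is basically
    # chance, i.e. the "cheating" labels can't be trusted as training data.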

And talk about black box-- this isn't even something that human review of specific incidents could validate. If a human couldn't reliably come to the correct conclusion based on available data, an AI using pattern matching with a questionable data set sure won't.

When the article mentioned companies using it, I figured it was just early-stage testing, or use for some low-stakes marketing/advertisement targeting. That would still be bad, but this is worlds worse. Life-ruining worse.

unishark · 3 years ago

My guess is they're trying to classify behaviors like glancing off in some direction repeatedly to look at a cheat sheet or a phone, the same things human proctors watch for.

Presumably the system can fail to accurately estimate gaze direction for some people because they are too different from the design assumptions made. I don't really think that's the same as phrenology. It's more akin to BMI being a poor estimate of obesity for bodybuilders.

bitwize · 3 years ago

Yes, especially since "AI" is the new Paracelsian alchemy.

UnknownEmpire · 3 years ago

Metadata phrenology has been around for a long time.

jdonaldson · 3 years ago

Could divisive click bait titles become the new journalism?

wmf · 3 years ago

Always has been.

microdrum · 3 years ago

Facial recognition works pretty well, as anyone who works in the field can tell you. So it's already different from phrenology. It's even more different from phrenology in that facial and voice recognition are search and surveying tools, not conclusion or judgment tools.

prvc · 3 years ago

The first portion of the audio (comprising all I heard) is just some guy kvetching about Alexa to a clueless but still pompous interviewer. As for "biometrics" per se, it seems a marginally small addition to all the other information channels available to the surveillance industry: not momentous in and of itself, and the concern seems to stem from questionable categorical distinctions (e.g., what you say vs. how you say it; wrong for others to consider the latter, not the former). And regarding phrenology: its previous incarnation was discredited as a pseudoscience on the basis of its vagueness and potential inaccuracy. Now that it can be implemented using deep learning, its predictions' accuracy quantified and vetted empirically, that criticism no longer applies.