This feature is extremely insecure now that there are several AIs that can replicate voices. If a scammer calls you and you say a few words (for example, "hello" and "sorry, I think you've got the wrong number"), a recording of that can be enough for them to replicate your voice.
It honestly wasn't that secure to begin with, since the audio gets the daylights crushed out of it by the phone system. AI probably just makes it easier, since now you can have a computer on the other end spit out whatever words you need.
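For a rough idea of how much the phone system throws away: narrowband calls end up at about 8 kHz sample rate and band-limited to roughly 300-3400 Hz. A minimal sketch of that degradation (assuming NumPy and SciPy; the function name and exact cutoffs here are just illustrative, not how any carrier actually processes calls):

    import numpy as np
    from scipy import signal

    def telephone_band(audio: np.ndarray, sr: int) -> tuple[np.ndarray, int]:
        """Roughly simulate narrowband telephone audio (illustrative only):
        resample to 8 kHz and band-limit to the ~300-3400 Hz voice band."""
        target_sr = 8000
        # Downsample to 8 kHz, the usual narrowband telephony rate.
        narrow = signal.resample_poly(audio, target_sr, sr)
        # 4th-order Butterworth bandpass for the classic POTS voice band.
        sos = signal.butter(4, [300, 3400], btype="bandpass",
                            fs=target_sr, output="sos")
        return signal.sosfilt(sos, narrow), target_sr

Everything above ~3.4 kHz, which carries a fair amount of what makes a voice distinctive, just isn't there, which is part of why matching voices over a call was never a great idea.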
Someone could probably get away with it just by sounding vaguely like the person they're impersonating.
Or just use the tried-and-true method of going through human support. In my experience, voice recognition over the phone has trouble with accents, so calling in to get around it isn't uncommon. It never works for me, for example; it just says "please try again" until it redirects me to an agent.