Using voice printing for authentication purposes – takeaways from the Rogers decision
Faced with ever-increasing risks of fraud, a growing number of organisations, including banks and telecom companies, are considering implementing voice-printing technologies to authenticate their customers. A voiceprint is a digital model of an individual’s unique vocal characteristics and is considered biometric data. As such, it can be used as an “audible fingerprint” to identify or authenticate a person through biometric analysis. Unlike passwords and traditional identifiers, which are increasingly exposed in data breaches and therefore available to threat actors, voice-printing technologies rely on biometric information that is by nature unique to an individual and can therefore provide an enhanced level of security.
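For context on how such a system typically operates, the sketch below is a minimal illustration of the general approach, under assumptions of our own: a numerical embedding of the customer’s voice is captured at enrolment and later compared against the embedding of a live call, and the caller is authenticated only if the two are sufficiently similar. The 192-dimension embeddings, the 0.75 similarity threshold and the simulated vectors are placeholders for demonstration; this is not a description of Rogers’ or any vendor’s actual implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_caller(enrolled_voiceprint: np.ndarray,
                  live_embedding: np.ndarray,
                  threshold: float = 0.75) -> bool:
    """Accept the caller only if the live voice sample is sufficiently
    close to the voiceprint captured at enrolment (placeholder threshold)."""
    return cosine_similarity(enrolled_voiceprint, live_embedding) >= threshold

# Illustration only: a real system would derive these vectors from audio
# with a speaker-recognition model; here we simulate 192-dimension embeddings.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=192)                            # stored voiceprint
same_speaker = enrolled + rng.normal(scale=0.1, size=192)  # slight session-to-session drift
other_speaker = rng.normal(size=192)                       # an unrelated voice

print(verify_caller(enrolled, same_speaker))   # expected: True
print(verify_caller(enrolled, other_speaker))  # expected: False
```

In practice, the similarity threshold is tuned to balance false acceptances (admitting an impostor) against false rejections (locking out the genuine customer), which is one reason the accuracy and governance of these systems attract regulatory scrutiny.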
However, voiceprint authentication can also be perceived as privacy-intrusive by the individuals concerned, as evidenced by a decision issued earlier this year by the Office of the Privacy Commissioner of Canada (OPC) against Rogers Communications Inc. (Rogers) (PIPEDA Findings #2022-003).
In addition to regulatory risks, the unlawful use of biometric technologies is fertile ground for class actions, as illustrated by the numerous class actions recently filed in the Southern District of California against banks using voice printing to authenticate customers, and by the first decision on the merits applying the Illinois Biometric Information Privacy Act, which pertained to the use of employees’ fingerprints for identification purposes and awarded USD 228 million in damages against their employer.