New research into web-tracking techniques has found websites using audio fingerprinting to identify and monitor web users.
During a scan of one million websites, researchers at Princeton University found that a number of them use the AudioContext API to process an audio signal whose output reveals a unique browser and device combination.
“Audio signals processed on different machines or browsers may have slight differences due to hardware or software differences between the machines, while the same combination of machine and browser will produce the same output,” the researchers explain.
The method doesn’t require access to a device’s microphone, but rather relies on the way a signal is processed. The researchers, Arvind Narayanan and Steven Englehardt, have published a test page to demonstrate what your browser’s audio fingerprint looks like.
“Using the AudioContext API to fingerprint does not collect sound played or recorded by your machine. An AudioContext fingerprint is a property of your machine’s audio stack itself,” they note on the test page.
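The general technique the researchers describe can be sketched in browser JavaScript: render a fixed tone through an `OfflineAudioContext` (no speakers or microphone involved) and collapse the resulting samples into a single value. Tiny floating-point differences in each machine's audio stack make that value differ across hardware/software combinations while staying stable for the same one. This is a hedged illustration, not the researchers' exact code; the function names here are illustrative.

```javascript
// Pure helper: reduce rendered audio samples to a compact fingerprint value.
// (Runs anywhere; only the rendering step below is browser-only.)
function fingerprintFromSamples(samples) {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) {
    sum += Math.abs(samples[i]);
  }
  return sum.toString();
}

// Browser-only sketch: render one second of a triangle wave through a
// dynamics compressor offline. The exact sample values depend on the
// machine's audio stack, so the derived value acts as a fingerprint.
async function audioContextFingerprint() {
  const ctx = new OfflineAudioContext(1, 44100, 44100); // 1 channel, 1 s @ 44.1 kHz
  const osc = ctx.createOscillator();
  osc.type = "triangle";
  osc.frequency.value = 10000;

  const compressor = ctx.createDynamicsCompressor();
  osc.connect(compressor);
  compressor.connect(ctx.destination);
  osc.start(0);

  const rendered = await ctx.startRendering();
  return fingerprintFromSamples(rendered.getChannelData(0));
}
```

Note that nothing here touches the microphone: the "signal" is synthesized and processed entirely inside the browser, which is why, as the researchers say, the fingerprint is a property of the audio stack itself.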