An audio deepfake is AI-generated or AI-edited speech designed to sound real; a text deepfake, similarly, is any AI-manipulated text published on the internet or in media. Because courts depend on trustworthy evidence, audio forgery detection has long been one of the key issues in the forensics profession.

Rob Volkert, a researcher at NISOS, said they think the criminals were trying the technology out to see if the targets would give them a call back. "But it doesn't sound like the CEO enough. They checked that box as far as: does it sound more robotic or more human? I would say more human," Volkert said. In other words, he said, this was just step one of a presumably more complex operation that was relatively close to being successful.

Though this deepfake audio got caught, Volkert, Badlu, and their colleagues believe it is a sign that criminals are starting to experiment with the technology, and we may see more attempts like this one. Audio and video deepfakes are getting more and more convincing, giving attackers the ability to create media realistic enough to lead employees to take dangerous actions.

"The ability to generate synthetic audio extends an e-criminal's toolkit, and the criminal at the end of the day still has to effectively use social engineering tactics to induce someone into taking an action," NISOS wrote in its report. "Criminals and potentially broader nation state actors also learn from each other, so as these high-profile cases gain more notoriety and success, we anticipate more illicit actors trying them and learning from others who have paved the way."