A Fortune report details how Siri could be used by phishers and scammers to bait unsuspecting users. The method relies on how Siri tries to identify unknown contacts, though the bait is easy to discover if you pay close attention to the details.
The cybersecurity firm that carried out the demo explains there are two ways to pull off this social engineering trick. In the first, the attacker sends the target a spoofed email from a fake or impersonated account, such as “Acme Financial.” The message must contain a phone number, say, in the email signature. If the target replies — even with an automatic out-of-office response — that number will then appear as “Maybe: Acme Financial” whenever the fraudster texts or calls next.
The second way of carrying out this hack is through text messages, and it is even easier: if the sender of an iMessage self-identifies with a proper noun in the message body, Siri will suggest that contact as “Maybe: <Insert proper noun here>.”
Attackers can use this disguise to their advantage when phishing for sensitive information. The next step involves either calling a target to supposedly “confirm account details” or sending along a phishing link. If a victim takes the bait, the swindler is in.
Siri is smart enough not to suggest a contact when generic words like “bank” or “credit union” appear in the email or message. However, the filter is easily fooled if the attacker uses the proper name of a financial institution instead.
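To see why a filter like this fails, here is a toy sketch of the behavior described above. This is purely illustrative — Apple has not published how Siri’s suggestion logic works, and the function and blocklist below are assumptions for demonstration, not Apple’s implementation:

```python
from typing import Optional

# Hypothetical blocklist of generic financial terms (assumption: Apple's
# actual filter is not public; this only models the behavior reported).
BLOCKED_TERMS = {"bank", "credit union"}

def maybe_suggestion(sender_name: str) -> Optional[str]:
    """Return a 'Maybe:' label for an unknown sender, unless the
    self-reported name contains a blocked generic term."""
    lowered = sender_name.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return None  # generic institution claim — suppress the suggestion
    return f"Maybe: {sender_name}"

# A generic claim is filtered out...
print(maybe_suggestion("Acme Bank"))       # → None
# ...but a proper institution name passes straight through.
print(maybe_suggestion("Acme Financial"))  # → Maybe: Acme Financial
```

The weakness is that a blocklist of generic words says nothing about proper names: “Wells Fargo” or “Acme Financial” contains no blocked term, so the disguise survives the check.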
Apple was informed of this issue on April 25th, but replied a week later saying it did not consider this a “security vulnerability.”
Apple’s response might not sit well with everyone, but it does make sense: there’s little the company can do if hackers and scammers keep finding novel ways to abuse its AI-powered features. What do you think, though? Could Apple do something here to prevent this from happening?