AI agents automate F&I and customer service tasks but cybercriminals can penetrate their vulnerabilities. They are also using AI to launch faster, more sophisticated attacks on customer data.
Ransomware remains an issue with the resurgence of threats such as Akira, a ransomware gang that first struck in April 2023, according to the federal Cybersecurity & Infrastructure Security Agency. It has caused problems for dealerships and others, said Logan Evans, vice president of offensive security at Intelligence Technical Solutions from Las Vegas. Akira started with a dealership employee clicking on a bogus cybersecurity prompt, according to an April report by ConnectWise, a software management company. âItâs hard to give a bill of clean health when youâve had a ransomware attack six months ago, because you never know,â Evans said. Sixty-seven percent of dealerships viewed ransomware as a top threat in 2026, while 66 percent felt the same about email phishing, the CDK study said. AI can make existing attack methods such as ransomware quicker and more intense. Cybercriminals completed attacks in 29 minutes on average last year, 19 minutes faster than the year earlier, CrowdStrike said. The fastest attack was just 27 seconds last year. AI technology is helping to create much more effective phishing emails, said Erik Nachbahr, president of Helion Technologies, a cybersecurity consulting firm focused on automotive and heavy-duty truck dealers. âThe attackers are using AI to create very targeted and very good-looking emails,â Nachbahr said. âGone are the days of misspellings and weird grammar.â AI also adds new risks for dealerships that use it to automate their back office or customer service tasks. âAll they see is all the great efficiency gains and how cool it is. All the risk is sort of hidden beneath that,â Evans said. Increasingly, cybersecurity companies are using AI tools to combat AI security breaches, Evans said. Dealership management systems, for example, may use an AI agent as a time saver to assess a customerâs financial records and make informed decisions about loan approvals and other financial tasks. A hacker could target that AI tool to falsely approve loans or to access customersâ private information, such as Social Security numbers, credit scores addresses and phone numbers, Evans said. Companies then design a second AI agent to manage data security for the first. âEven if that first AI gets hacked, the second AI steps in and says, âWhoa, stop what youâre doing. Somethingâs wrong,â Evans said. âThen you have a third AI that manages that one and so on.â
Human error also creates big cybersecurity risks as dealerships adopt more generative AI such as ChatGPT. Dealership employees paste customersâ personal data into ChatGPT and other programs to verify financial information. Many arenât doing so with cybersecurity policies or processes in place, Nachbahr said. Nachbahr warned dealerships to restrict many employeesâ use of AI tools to verify financial information without protections in place. Joe Shaker, owner of Shaker Auto Group in Watertown, Conn., agreed. Some sales managers and other employees use ChatGPT-style tools independently and feed the program customer records, creating a major cybersecurity risk, Shaker said. Bad actors can also hide instructions in customer messages, emails or VIN scans that manipulate the sales agent into giving up customer information, he said. Dealerships and their employees should think holistically and communicate openly about risks and protections that are in place, Sarid-Hausirer said. âFrom the OEMs to the suppliers, to the dealerships and anything that connects to the vehicle, all has to be orchestrated together,â she said.