The Emergence of AI Hackers Poses a Greater Threat to Cybersecurity

Summary

New research from Microsoft reveals that hackers are now incorporating artificial intelligence and encryption to create more sophisticated and harmful cyberattacks. Tom Burt, Microsoft’s corporate vice president for customer security and trust, reports that hackers are using generative AI tools, including AI chatbots, to create stealthier attacks. These tools allow cybercriminals and nation-states to refine the language and imagery they use in phishing attacks and influence operations. Microsoft has also observed an emerging ransomware technique in which hackers encrypt data remotely, leaving less evidence behind and making recovery harder for companies. This technique was used in around 60% of the human-operated ransomware attacks Microsoft observed in the last year.

Content

Hackers are using new tools to steal data and attack companies, and it is becoming harder for those companies to protect themselves. According to Microsoft researchers, attacks in which hackers steal data and demand money doubled between November 2022 and June 2023, and ransomware attacks increased by 200% between September 2022 and June 2023. These attacks are customized, often operated by humans rather than automated tools, and increasingly involve demands for payment in exchange for not releasing the stolen data. The trend is growing in part because more companies have become able to recover from ransomware encryption on their own, so encryption alone gives attackers less leverage.

To make money, hackers are shifting toward stealing data and demanding a ransom in exchange for not leaking it. “We definitely are seeing more threat actors moving toward extortion,” said Jake Williams, faculty at IANS Research and a former offensive hacker at the National Security Agency.

Tech and cybersecurity companies are increasingly building AI into their security tools to counter cyberattacks, and investment is flowing to companies that use AI to manage security and risk, as seen in Cisco Systems’ purchase of Splunk. However, cybersecurity and national security officials have warned about the risk of hackers using AI tools to infiltrate systems and have emphasized the need for the government to develop AI technologies to counter such attacks. Tech executives recently met with U.S. senators to discuss AI regulation.

According to Lukasz Olejnik, a cybersecurity researcher and consultant, hackers are using large language models similar to those behind generative AI tools to accelerate parts of their attacks, such as writing phishing emails and creating malware. Although these models require vast amounts of data to train, a single individual can now operate them, replacing work that once required a team. Diego Souza, chief information security officer at manufacturer Cummins, has observed a significant increase in authentic-looking phishing emails since the release of generative tools such as OpenAI’s ChatGPT last year. These emails now convincingly imitate real companies and people, with far more natural language. Cybercriminals can subscribe to underground phishing services for $200 to $1,000 per month, according to Microsoft.

Burt believes hackers will keep finding new ways to use technology in their work, and they may start using artificial intelligence to make their attacks even more effective. For now, though, the most common ways hackers break into computer systems remain phishing, password spraying, and brute-force attacks, and companies need to be aware of these risks and take steps to protect themselves. The goal of hackers is to find the most cost-effective way to infiltrate their target.

Picture and Article Source: WSJ
