▶주메뉴 바로가기

▶본문 바로가기

The Korea Herald
검색폼

THE INVESTOR
July 04, 2024

Industrials

Cyber attacks using generative AI on rise: SK Shieldus

  • PUBLISHED :July 02, 2024 - 16:41
  • UPDATED :July 02, 2024 - 16:41
  • 폰트작게
  • 폰트크게
  • facebook
  • sms
  • print

Lee Jae-woo, group leader of EQST business at SK Shieldus speaks at a media seminar in Seoul on Tuesday. (SK Shieldus)

Cyberattacks using generative artificial intelligence, or targeting the weak points of the new technology are on the rise, SK Shieldus, a leading security service company in Korea said on Tuesday.

In a media seminar, SK Shieldus and EQST, the company's white-hat hackers group, presented its findings on the rising hacking trends using AI and underscored the importance of raising awareness and taking preventive measures to reduce damage.

“Large Language Models are being used actively to assist hackers in their attack attempts, and such cyberattacks are becoming more elaborate and advanced," Lee Jae-woo, the group leader of EQST business arm said.

According to Lee, deepfake phishing and virtual asset theft were the hot topics in the global security sector in the first half of the year.

In Korea, most cyber infringement cases occurred in the finance industry, taking 20.6 percent of the total cases in the first half of this year, according to SK Shieldus statistics. IT and communication industry followed in the list with 18 percent and the manufacturing sector took 16.4 percent.

Globally, the pubic and administration sectors suffered the most cases of cyberattacks, taking 26.7 percent, while the IT and communication sectors followed with 22.4 percent.

On the AI fraud and hacking cases using generative AI and chatbots, EQST picked out three critical vulnerabilities often occurring among the top 10 picked out by OWASP, a nonprofit charity.

"The generative AI chatbots can be misused to create malware codes, or to access disclosed information such as drug manufacturing methods or personal information," Lee explained.

Prompt injection, one of the three, occurs when an attacker manipulates a large language model through crafted inputs, causing the LLM to unknowingly execute the attacker’s intentions. In other words, an attacker may continue to ask questions to draw answers from the chatbots that were initially prevented from access.

Insecure output handling is also one that refers specifically to insufficient validation and handling of the outputs generated by LLMs before they are passed downstream to other components and systems -- offering users indirect access to additional, unintended functionality.

Sensitive information disclosure is also a serious problem, where sensitive information, proprietary algorithms, or other confidential details are revealed through generative AI.

"Now companies are developing and adopting small language models tailored for their own needs, to only train the information necessary for their work," Lee Ho-seok, the head leading EQST Lab team, said.

To prevent such attacks, Lee suggested companies adopt prompt security and data filtering solutions.

By Jo He-rim (herim@heraldcorp.com)

EDITOR'S PICKS