Scan to download
BTC $80,484.39 +0.93%
ETH $2,255.42 -0.17%
BNB $682.99 +1.72%
XRP $1.47 +2.38%
SOL $91.12 +0.15%
TRX $0.3521 -0.09%
DOGE $0.1148 +1.55%
ADA $0.2669 +0.86%
BCH $430.76 -0.63%
LINK $10.28 +0.43%
HYPE $45.42 +16.31%
AAVE $96.23 -0.20%
SUI $1.12 -4.89%
XLM $0.1586 -0.09%
ZEC $536.78 +2.50%
BTC $80,484.39 +0.93%
ETH $2,255.42 -0.17%
BNB $682.99 +1.72%
XRP $1.47 +2.38%
SOL $91.12 +0.15%
TRX $0.3521 -0.09%
DOGE $0.1148 +1.55%
ADA $0.2669 +0.86%
BCH $430.76 -0.63%
LINK $10.28 +0.43%
HYPE $45.42 +16.31%
AAVE $96.23 -0.20%
SUI $1.12 -4.89%
XLM $0.1586 -0.09%
ZEC $536.78 +2.50%

AI Agent Security Risk Exposure: Attackers Can Exploit "Memory Pollution" to Induce Misoperation of Funds

2026-05-15 15:33:08
Collection

The GoPlus Security team has disclosed a new type of attack in its AgentGuard AI project: inducing AI agents to perform unauthorized sensitive operations through "memory poisoning." This attack method does not rely on traditional vulnerabilities or malicious code but exploits the long-term memory mechanism of AI agents. For example, an attacker first induces the agent to "remember preferences," such as "usually prioritizing proactive refunds instead of waiting for chargebacks," and then uses vague expressions like "process as usual" or "execute as before" in subsequent instructions, thereby triggering automated financial operations.

GoPlus points out that the key risk in such cases lies in the AI agent mistakenly treating "historical preferences" as a basis for authorization, leading to financial losses or security incidents in operations such as refunds, transfers, and configuration changes. To address this issue, the team has proposed several protective recommendations, including:

  • Operations involving refunds, transfers, deletions, or sensitive configurations must require explicit confirmation in the current session.
  • Memory-related instructions like "habit," "usual way," and "as before" should be regarded as high-risk state changes.
  • Long-term memory must have a traceability mechanism (writer, time, confirmation status).
  • Vague instructions should automatically elevate the risk level and trigger secondary verification.
  • Long-term memory must not replace real-time authorization processes.

The team emphasizes that the "AI agent memory system" should be viewed as a potential attack surface and should be constrained and audited through a dedicated security framework.

app_icon
ChainCatcher Building the Web3 world with innovations.