Imperceptible Attacks in LLM-aided NFT Transactions via Blockchain Semantic Poisoning

Disciplines

Computer Engineering

Abstract (300 words maximum)

The management of Non-Fungible Tokens (NFTs) depends on public blockchain metadata, event logs, and transaction histories to ensure verifiable ownership and authenticity. While smart contract code typically undergoes extensive auditing, the surrounding on-chain data layer remains an underexplored attack surface. To address this gap, we present an experimental research platform built on a private Ethereum-compatible network. The platform implements a suite of 3D asset management contracts that simulate realistic NFT lifecycles integrated with Generative AI (GAI) models. These artifacts enable systematic investigation of semantic poisoning attack scenarios in which, even without altering contract code, trusted on-chain data can be manipulated to trigger unauthorized transfers, misattribute ownership, or corrupt asset provenance—particularly through the exploitation of Large Language Models (LLMs). We further discuss how LLM-assisted attacks could compromise NFTs encapsulating GAI-generated 3D assets by substituting, corrupting, or misdirecting asset pointers and metadata. By offering a testbed and realistic 3D asset workflows, our platform substantially advances research on semantic poisoning attacks in blockchain ecosystems.

Use of AI Disclaimer

no

Academic department under which the project should be listed

CCSE – Software Engineering and Game Development

Primary Investigator (PI) Name

Chenyu Wang

This document is currently not available here.

Share

COinS
 

Imperceptible Attacks in LLM-aided NFT Transactions via Blockchain Semantic Poisoning

The management of Non-Fungible Tokens (NFTs) depends on public blockchain metadata, event logs, and transaction histories to ensure verifiable ownership and authenticity. While smart contract code typically undergoes extensive auditing, the surrounding on-chain data layer remains an underexplored attack surface. To address this gap, we present an experimental research platform built on a private Ethereum-compatible network. The platform implements a suite of 3D asset management contracts that simulate realistic NFT lifecycles integrated with Generative AI (GAI) models. These artifacts enable systematic investigation of semantic poisoning attack scenarios in which, even without altering contract code, trusted on-chain data can be manipulated to trigger unauthorized transfers, misattribute ownership, or corrupt asset provenance—particularly through the exploitation of Large Language Models (LLMs). We further discuss how LLM-assisted attacks could compromise NFTs encapsulating GAI-generated 3D assets by substituting, corrupting, or misdirecting asset pointers and metadata. By offering a testbed and realistic 3D asset workflows, our platform substantially advances research on semantic poisoning attacks in blockchain ecosystems.