Location
https://www.kennesaw.edu/ccse/events/computing-showcase/sp26-cday-program.php
Document Type
Event
Start Date
April 22, 2026, 4:00 PM
Description
Construction floor plans contain rich architectural information, but they are not directly usable for robotic navigation in IoT-enabled smart buildings. They often require manual processing to remove irrelevant annotations and extract navigable layouts. Existing methods either depend on fixed image-processing pipelines that do not generalize well across different floor plan styles or on data-intensive learning models that require large annotated datasets. We propose a dual-memory multi-agent framework that treats floor-plan-to-map conversion as a sequential, experience-driven decision process under data scarcity. The framework uses three cooperative agents for perception, decision-making, and evaluation, which interact through a shared persistent memory represented as a relational experience graph. Each memory entry stores a visual embedding that captures floor plan characteristics and a logical embedding that records processing decisions, outcomes, and failure patterns. This design enables similarity-based retrieval, reuse of successful configurations, and explicit avoidance of past failures. Experiments on benchmark construction floor plans, open-source residential floor plans, and real-world robot navigation show strong structural preservation, stable cross-domain generalization with frozen memory, and reliable physical deployment. Our system preserves navigation-critical topology while avoiding structural corruption, demonstrating data-efficient and transferable performance across diverse floor plan styles.
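The memory design described above (paired visual and logical embeddings, similarity-based retrieval, reuse of successes and avoidance of failures) can be sketched in miniature. This is an illustrative sketch only; the class, field names, and cosine-similarity retrieval below are assumptions for exposition, not the authors' actual implementation.

```python
import numpy as np

class ExperienceMemory:
    """Hypothetical sketch of a dual-memory store: each entry pairs a
    visual embedding (floor-plan appearance) with a logical record
    (processing decisions and their outcome)."""

    def __init__(self):
        self.entries = []  # list of (visual_vector, logical_record) pairs

    def add(self, visual_vec, decisions, success):
        v = np.asarray(visual_vec, dtype=float)
        v = v / (np.linalg.norm(v) + 1e-12)  # unit-normalize for cosine similarity
        self.entries.append((v, {"decisions": decisions, "success": success}))

    def retrieve(self, query_vec, k=1, successes_only=True):
        """Return the k most visually similar past experiences;
        by default, failed experiences are filtered out so their
        configurations are not reused."""
        q = np.asarray(query_vec, dtype=float)
        q = q / (np.linalg.norm(q) + 1e-12)
        scored = [(float(v @ q), rec)
                  for v, rec in self.entries
                  if rec["success"] or not successes_only]
        scored.sort(key=lambda s: -s[0])
        return scored[:k]

# Usage: a new floor plan retrieves the configuration that worked on
# the most similar previously seen plan.
mem = ExperienceMemory()
mem.add([1, 0, 0], decisions={"threshold": 0.6}, success=True)
mem.add([0, 1, 0], decisions={"threshold": 0.3}, success=False)
best = mem.retrieve([0.9, 0.1, 0.0], k=1)
```

In a full system the logical record would also store failure patterns, and retrieval would feed the decision-making agent rather than return raw entries; this sketch only shows the retrieve-and-reuse mechanic.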
Included in
GRP-150-192 From Construction Floor-Plan to Robot-Ready Navigation Map: A Dual-Memory Multi-Agentic AI That Learns from Its Own Successes and Failures