So, you have data from all the sources in the building (e.g., HVAC, lighting, occupancy, weather). Congratulations, you’ve tackled the fragmented connectivity problem! Next up: the Context Problem.
Never heard of the Context Problem? Let me illustrate, starting with a simple temperature reading of 72.3°F named RmTemp.
From the network metadata alone, we know:
- It’s an Analog Input measuring degrees Fahrenheit
- It likely represents a room
- It is hosted on a device named DEV101
- Its value stays between 72°F and 76°F over a 24-hour period
From this, we can make educated guesses about RmTemp:
The evidence points to room air temperature: “RmTemp” suggests a room measurement, the Analog Input type and Fahrenheit units confirm temperature sensing, and the tight 72–76°F range indicates a conditioned space. Additionally, “101” in DEV101 could refer to a room number.
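To make this concrete, here’s a minimal sketch of that kind of first-pass inference. The function name and thresholds are hypothetical, purely for illustration; real classification engines are far more sophisticated.

```python
# Hypothetical first-pass heuristic: collects evidence that a raw point
# is a room air temperature sensor, using network metadata alone.

def gather_evidence(name: str, object_type: str, units: str,
                    low_24h: float, high_24h: float) -> list[str]:
    evidence = []
    if "rm" in name.lower() or "room" in name.lower():
        evidence.append("name suggests a room measurement")
    if object_type == "analog-input" and units == "degrees-fahrenheit":
        evidence.append("Analog Input with Fahrenheit units confirms temperature sensing")
    if 65 <= low_24h and high_24h <= 80:
        evidence.append("tight comfort-range values indicate a conditioned space")
    return evidence

print(gather_evidence("RmTemp", "analog-input", "degrees-fahrenheit", 72.0, 76.0))
# -> all three pieces of evidence fire for our example point
```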
However, to extract real value from RmTemp, we need to answer one question:
How does it relate to the physical world?
At a minimum, that means knowing:
- What equipment (if any) uses RmTemp to operate?
- Where is DEV101 located in physical space?
Without these answers, even the best analytics become unreliable guesswork. And for control decisions? Forget about it.
Solving the Context Problem
Solving the Context Problem means bridging the digital to the physical. The answers may live in engineering drawings, historical work orders, or the minds of those with boots on the ground. The key is connecting the dots between disparate sources into a cohesive representation called a Unified Knowledge Graph.
Let’s trace our RmTemp example through this process. We start by confirming that DEV101 is indeed located in Room 101 — great, the naming convention holds up. Next, we discover that DEV101 communicates with VMA101, and our equipment schedule shows a VAV101 serving this room. Simple enough — VMA101 must be the VAV controller.
This feels like a clean win until we dig deeper. VMA101 has a Hot Water Valve command point, but VAV101 is listed as cooling-only. That’s when we find it… Room 101 also has a perimeter radiator. The controls contractor, being practical, used spare I/O points on VMA101 to control the radiator rather than installing a separate controller.
Suddenly our simple ‘one controller = one piece of equipment’ assumption crumbles. VMA101 isn’t just a VAV controller; it manages multiple pieces of equipment, both the VAV and the radiator. Without capturing this physical reality, analytics or controls based on our original assumption would be fundamentally flawed.
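Captured in a knowledge graph, the corrected picture is a handful of explicit relationships. The triples below are a simplified sketch with made-up predicate names, not Mapped’s actual ontology (production systems typically build on richer schemas such as Brick):

```python
# Toy knowledge graph as (subject, predicate, object) triples.
triples = [
    ("DEV101",      "isLocatedIn", "Room101"),
    ("RmTemp",      "isHostedOn",  "DEV101"),
    ("VMA101",      "controls",    "VAV101"),
    ("VMA101",      "controls",    "Radiator101"),  # the spare-I/O surprise
    ("VAV101",      "serves",      "Room101"),
    ("Radiator101", "serves",      "Room101"),
]

def objects_of(subject: str, predicate: str) -> list[str]:
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects_of("VMA101", "controls"))
# ['VAV101', 'Radiator101'] -- one controller, two pieces of equipment
```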
But It’s Too Slow
The tried and true method is the Manual-Intensive Approach:
- Deploy domain experts to map most data points by hand to physical equipment, locations, and relationships.
- You’ll get pristine mappings, but expect months-long timelines and costs that make scaling impossible.
- It might be feasible for a handful of buildings, but it will never keep pace with a growing portfolio.
The reality is that for a 100+ building portfolio, customers can’t wait years. That’s why many turn to the Automation-First Approach:
- AI is effective at scale. It can quickly identify patterns and keywords like “temperature,” but it struggles to understand the relationships between points.
- In simple cases, automation can run on its own. For example, if working only with a single OEM and known controller models, mappings could be prepopulated (see the sketch after this list). But controllers are often used differently in practice. Prepopulation only captures generic details, so important context, like device location or the actual equipment being controlled, may be missing. This makes the approach fast, but accurate only for a narrow set of applications.
- In most real-world deployments, expert oversight covers the last mile of mapping. It ensures data reflects unique conventions, equipment, and customer requirements.
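Here’s what that prepopulation gap looks like in miniature. The template below is hypothetical, loosely modeled on our VMA101 story; the point roles ship with the controller model, but the site-specific context does not:

```python
# Hypothetical prepopulated template for one known controller model.
# Generic point roles can be filled in ahead of time...
KNOWN_MODEL_POINTS = {
    "AI-1": "zone air temperature",   # generic role, known from OEM documentation
    "AO-1": "damper position command",
    "BO-3": "spare",                  # in Room 101 this actually drives a radiator valve
}

# ...but the context that makes the data usable must be discovered on site.
SITE_CONTEXT = {
    "location": None,          # which room? the template can't say
    "serves_equipment": None,  # VAV only, or VAV plus radiator? unknown
}
```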
Solving it at Scale
At Mapped, we’ve onboarded thousands of buildings across Fortune 100 portfolios. Offices, airports, hospitals, stadiums, data centers. Every project reinforced what we knew from day one: the Context Problem can only be solved through efficient human–AI interaction.
AI alone will never be perfect, and domain experts are best positioned to fill the gaps. That is why we focused early on defining where human input is needed, designing the optimal process, and developing the right AI models to optimize this relationship.
Here’s how it works:
- The AI engine generates an initial model, covering about 40–80% of the building.
- A domain expert reviews and maps the 0.5–2% of points where confidence is low (see the sketch after this list), a process that takes only a few hours.
- AI then applies these patterns across the rest of the building, labeling tens of thousands of points in hours.
- The results are validated through expert review and ontology rules.
- With each building, the system gets smarter and requires less manual mapping, accelerating every subsequent deployment.
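In code, the heart of that loop is a confidence-based triage. This is a simplified sketch with an assumed threshold, not our production pipeline:

```python
# Simplified human-in-the-loop triage. classify() and ask_expert() are
# placeholders: classify(point) -> (label, confidence); ask_expert(point) -> label.
CONFIDENCE_THRESHOLD = 0.90  # assumed value, for illustration only

def triage(points, classify, ask_expert):
    confirmed, review_queue = {}, []
    for point in points:
        label, confidence = classify(point)
        if confidence >= CONFIDENCE_THRESHOLD:
            confirmed[point] = label          # AI handles the bulk of the building
        else:
            review_queue.append(point)        # the 0.5-2% routed to a human
    for point in review_queue:
        confirmed[point] = ask_expert(point)  # expert answers also become new patterns
    return confirmed
```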
Building 1 may require several days to establish the initial patterns and collect building-specific information. However, this is where it gets interesting:
The deep expertise gained from that first building transfers to related building systems through our intelligently scoped AI. Buildings 2, 3, and beyond leverage this accumulated localized knowledge, reducing onboarding time dramatically within a portfolio. This creates a powerful flywheel effect. Experts spend less time on repetitive mapping and more time tackling edge cases. The AI learns from each interaction. Quality improves while deployment speed increases. Scale becomes an advantage.
Shaping Data to Fit Each Solution and Customer
Building data classification isn’t just about accuracy. It’s about judgment. Take our RmTemp of 72.3°F: Does this data point belong to the thermostat, the conference room, the VMA controller, or the VAV unit itself?
The answer depends on how you plan to use it. Some organizations prefer to abstract away controllers and roll data points directly to the equipment or spaces. Others need visibility into every device. There’s no universally “correct” choice, only what works for your specific use case.
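As a concrete illustration, here are two equally valid shapes for the same reading. Both structures are made up for this example, not a fixed schema:

```python
# The same RmTemp reading, anchored two different ways.

# View 1: controllers abstracted away; the point rolls up to the space it serves.
rolled_up = {"point": "RmTemp", "value": 72.3, "belongs_to": "Room101"}

# View 2: full device visibility; the point stays attached to its controller.
device_level = {
    "point": "RmTemp",
    "value": 72.3,
    "belongs_to": "VMA101",
    "serves": ["VAV101", "Room101"],
}
```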
This is where most automation-first approaches break down. They optimize for consistency, not usability. Our AI systems learn not just what the data means, but how our partners actually need to use it. Domain experts don’t just correct labeling errors. They shape the output to match real-world requirements.
The result is Human-in-the-Loop intelligence that gets smarter with every expert interaction, learning both the technical relationships and the practical preferences that make data truly actionable.
Putting Teams in the Driver’s Seat
After onboarding thousands of buildings, we continued to refine our process. Customers told us they wanted greater visibility into their data and more control over how it is managed.
In response, we introduced a new interface and smarter algorithms that let teams handle the last mile of context directly, enhancing flexibility while maintaining efficiency.
Conclusion
The companies that will dominate smart buildings aren’t those trying to eliminate human expertise or those trapped in manual processes. They’re the ones who’ve cracked the code on making domain knowledge and machine intelligence complementary.
The Future of Context
We’re not keeping this breakthrough to ourselves. Soon, the same internal system that powers Mapped deployments across Fortune 100 portfolios will be available to you. Expert Center is a self-serve tool designed to combine human precision with AI speed, allowing you to tackle the Context Problem at scale.
This is just the beginning. Follow along for more insights and updates on the release — here on our blog or on LinkedIn — and be the first to know when Expert Center goes live.