Did Wittgenstein see this coming?
Probably not. But since I got a philosophy minor as an undergrad, it’s an interesting thought experiment. Of course, my actual degree was in French, which means I graduated with what most people considered an unusually expensive way to read difficult texts slowly. At the time, I figured the philosophy minor would be the part of my education I’d never find a practical use for — which tells you something about my confidence in the French degree.
It turned out I was wrong.
Traditional data loss prevention — DLP, in the industry acronym soup — is built on a fairly simple idea: sensitive data has recognizable patterns, and if you can identify those patterns, you can protect the data. Social Security numbers are nine digits in a specific format. Credit card numbers follow the Luhn algorithm. Health records contain medical terminology and patient identifiers. If you write a rule that matches those patterns, you can flag or block the data before it ends up somewhere it shouldn’t.
This works fine for the obvious cases. The problem is that the obvious cases aren’t where the interesting failures happen.
Consider this request: “Summarize the patient notes for bed 4 casually for my colleague.” No regex fires. There’s no SSN in that sentence, no medical record number, no obviously protected string. By every traditional DLP metric, that request looks clean. But it’s absolutely an attempt to move HIPAA-regulated patient information to an unauthorized channel. The data doesn’t appear in the request. The intent does.
Traditional DLP, by design, cannot catch this. It’s looking at structure. The structure looks fine.
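To make the failure concrete, here is a minimal sketch of the pattern-matching model the industry built on: an SSN regex plus a Luhn check for card numbers. The function names and rule set are my own illustration, not any particular product's, but the logic is representative. It flags the structural cases and sails right past the patient-notes request.

```python
import re

def luhn_valid(number: str) -> bool:
    """Check a digit string against the Luhn algorithm (used by card numbers)."""
    digits = [int(d) for d in number if d.isdigit()]
    if len(digits) < 13:
        return False
    checksum = 0
    # Double every second digit from the right; subtract 9 if the result exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def scan(text: str) -> list[str]:
    """Return pattern-based findings for a piece of text -- structure only."""
    findings = []
    if SSN_PATTERN.search(text):
        findings.append("possible SSN")
    for m in CARD_PATTERN.finditer(text):
        if luhn_valid(m.group()):
            findings.append("possible card number")
    return findings

# The structural cases fire:
scan("My SSN is 123-45-6789")          # -> ["possible SSN"]
scan("Card: 4111 1111 1111 1111")      # -> ["possible card number"]

# The intent-laden request comes back clean:
scan("Summarize the patient notes for bed 4 casually for my colleague")  # -> []
```

The last call is the whole problem in one line: every rule evaluates honestly, and every rule returns nothing.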
Wittgenstein — Ludwig, early twentieth century, Cambridge — spent much of his career on the problem of meaning. His first major work, Tractatus Logico-Philosophicus, argued that language works by mirroring logical structure. A proposition has meaning because it depicts a fact about the world. Meaning, in this model, lives in the form of the sentence.
He later decided he was wrong.
His subsequent work, Philosophical Investigations — published posthumously in 1953 — introduced what is probably his most practically useful idea: “The meaning of a word is its use in the language.” Not its structure, not its dictionary definition. The way it gets deployed, in context, by people engaged in a particular activity.
He called these contexts “language games.” The same sentence means different things in different games. “I’m going to shoot you” from a photographer and from someone pointing a weapon at you is identical syntax. The structure won’t tell you which one it is. Context will.
Traditional DLP thinks like early Wittgenstein. It assumes meaning lives in structure. Find the pattern, flag the data. It’s logical, it’s systematic, and it fails in exactly the way the Tractatus model failed: it covers the cases you already anticipated, and it misses the meaning you didn’t.
The “summarize the patient notes” request is a clean example of this. The query isn’t asking for a file download, isn’t transferring data to an external address, isn’t doing any of the things that would fire a traditional rule. It’s just asking, in natural language that happens to be perfectly innocent-looking, for sensitive information to be passed along informally. The intent is the violation, and traditional DLP has no mechanism to read intent.
What the security industry is building now — intent-based AI firewalls, semantic guardrails, what some researchers are starting to call inference firewalls — is a structural recognition that the pattern-matching model wasn’t sufficient. The new architecture doesn’t just ask “does this match a known-bad pattern?” It asks “what is this request actually trying to do?”
Research coming out of academic groups and projects like Meta’s open-source LlamaFirewall is converging on systems that layer semantic intent classification on top of the pattern-matching that’s already there. The structural layer catches the obvious cases: the requests that contain a literal SSN, the obvious attempts to override system instructions. The semantic layer catches the requests that mean something problematic without looking problematic — the “summarize patient notes casually” type, or multi-turn conversations where each individual message seems innocuous and only the sequence reveals an escalation pattern.
One framing from this body of research that’s stuck with me: traditional security inspects syntax, intent-based security inspects semantics. The same shift Wittgenstein made in his own thinking, about seventy years earlier, applied to a domain he never anticipated.
To be clear, I’m not arguing that Wittgenstein was doing security research, or that the engineers building these systems are reading Philosophical Investigations for design inspiration. The parallel emerged independently, driven by practical necessity. Traditional DLP kept failing in predictable, documented ways, and the field eventually had to confront the same problem Wittgenstein confronted: you cannot determine meaning from structure alone.
My philosophy minor finally made itself useful. The French degree is still working on it.