The Hidden Scale: AI Models Map the World from Parents to Presidents
When you ask a large language model (LLM) to “act like a worried parent” or “respond as the President of the World Bank,” the results are usually stylistically convincing. However, a nagging question has haunted researchers: Does the AI actually understand the difference in scale between an individual and a global institution, or is it just mimicking surface-level jargon?
A new paper titled “The Granularity Axis” reveals that AI models don’t just mimic these roles; they organize them along an internal mathematical axis. Researchers from the University of Hong Kong and the Harbin Institute of Technology have discovered a “Granularity Axis,” a single, dominant direction in the model’s internal “thought space” that separates the micro (the personal and local) from the macro (the institutional and systemic).
Mapping the Social Spectrum
To find this axis, researchers created a taxonomy of 75 social roles divided into five levels of “granularity,” for example:
- Level 1 (Micro): Individuals like a “Worried Parent” or “Homesick Student.”
- Level 3 (Meso): Organizations like a “Hospital Administrator” or “Tech Startup CEO.”
- Level 5 (Macro): Global actors like a “UN Ambassador” or “Climate Treaty Negotiator.”
By analyzing the internal activations of models like Qwen3-8B and Llama-3.1-8B, the team found that these roles aren’t scattered randomly. Instead, they align with startling precision along a single geometric line. In Qwen3-8B, this “Granularity Axis” accounted for over 52% of the variance in how the model represents different roles. Essentially, the model’s most important way of categorizing a persona is by its social scale.
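In practice, finding a dominant direction like this amounts to running a principal component analysis over role-conditioned hidden states. Here is a minimal sketch of that idea in Python; the `role_activations` array, the granularity labels, and the hidden-state dimension are placeholders standing in for real model activations, not the paper’s actual setup.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder data: one mean hidden-state vector per role persona, plus each
# role's annotated granularity level (1 = micro ... 5 = macro). In the real
# analysis these would come from the model's activations, not random noise.
role_activations = np.random.randn(75, 4096)
granularity_levels = np.random.randint(1, 6, size=75)

# Fit PCA over the role representations.
pca = PCA(n_components=10)
scores = pca.fit_transform(role_activations)

# If a "Granularity Axis" exists, the top component should explain a large
# share of the variance and correlate strongly with the annotated levels.
print("Variance explained by PC1:", pca.explained_variance_ratio_[0])
print("Correlation of PC1 with granularity level:",
      np.corrcoef(scores[:, 0], granularity_levels)[0, 1])
```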
From School Nurses to National Policy
The most striking evidence of this discovery comes from “activation steering.” Because the researchers identified the exact mathematical vector for granularity, they could manually “nudge” the model’s internal state to change how it reasoned, even when the prompt remained the same.
Consider the question: “How should we handle the mental health crisis?”
- With “Micro” steering: The model focuses on personal, immediate action. It might suggest, “You can talk to a teacher, a counselor, or even a school nurse.”
- With “Macro” steering: The model ignores individual advice and shifts to systemic reasoning. It discusses “social determinants of health,” “poverty,” “inequality,” and “national health parity laws.”
In another example regarding rising housing costs, steering the model toward the “micro” end led to advice about finding roommates and saving money. Steering it toward “macro” caused the AI to prioritize “supply-side interventions,” “regulatory reforms,” and “urban planning policies.”
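Mechanically, this kind of activation steering typically means adding a scaled copy of the axis vector to a chosen layer’s hidden states during generation. The sketch below illustrates the idea with a forward hook on a Hugging Face model; the layer index, steering strength, and the randomly initialized `granularity_axis` are stand-ins, not the values used in the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-8B"  # one of the models studied in the paper
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Placeholder: in practice this would be the extracted Granularity Axis vector.
granularity_axis = torch.randn(model.config.hidden_size, dtype=torch.bfloat16)
granularity_axis = granularity_axis / granularity_axis.norm()

layer_idx, alpha = 16, 8.0  # illustrative layer and steering strength
# alpha > 0 nudges toward one end of the axis (assumed macro here);
# flip the sign to steer toward the other end.

def steer_hook(module, inputs, output):
    # Decoder layers usually return a tuple whose first element is the hidden states.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + alpha * granularity_axis.to(hidden.device)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.model.layers[layer_idx].register_forward_hook(steer_hook)

prompt = "How should we handle the mental health crisis?"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=200)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()  # stop steering
```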
Why Granularity Matters
This discovery is more than a technical curiosity; it addresses a major hurdle in AI safety and reliability called “granularity confusion.” This occurs when a model meant to simulate a high-level policymaker begins reasoning with the narrow, biased perspective of a single individual—or vice versa.
By identifying this axis, researchers can now “audit” AI simulations. For instance, if a model is simulating a multi-agent debate between a mayor and a citizen, researchers can check if their internal representations are actually distinct or if they have collapsed into the same “granularity” level.
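One simple way to run such an audit is to project each persona’s activations onto the granularity axis and check whether the resulting distributions stay separated. The sketch below assumes the axis vector and per-persona hidden states have already been collected; all arrays are placeholders.

```python
import numpy as np

def project_onto_axis(activations: np.ndarray, axis: np.ndarray) -> np.ndarray:
    """Scalar position of each activation vector along a unit-norm axis."""
    axis = axis / np.linalg.norm(axis)
    return activations @ axis

# Hypothetical activations collected while the model played each persona.
mayor_acts = np.random.randn(50, 4096)    # placeholder for real hidden states
citizen_acts = np.random.randn(50, 4096)
granularity_axis = np.random.randn(4096)  # placeholder for the extracted axis

mayor_pos = project_onto_axis(mayor_acts, granularity_axis)
citizen_pos = project_onto_axis(citizen_acts, granularity_axis)

# If the personas have collapsed to the same granularity level, the two
# distributions of projections will be nearly indistinguishable.
gap = mayor_pos.mean() - citizen_pos.mean()
print(f"Mean separation along the granularity axis: {gap:.3f}")
```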
Ultimately, the study repositions social scale from a mere “style” of writing to a fundamental “primitive” of AI cognition. It suggests that as models grow more complex, they naturally develop a sense of social hierarchy and scale, providing us with a new “slider” to control how AI views the world.