
The Origin of SVSS (v2.1.1)

Abstract

It is observed that identical outputs are not obtained from identical inputs—large language models (hereafter referred to as “LLMs” for convenience) inherently contain non-reproducibility derived from probabilistic generation. Even so, semantic continuity, coherence, and an experience of “getting through” are often encountered on the human side. In this document, that paradox is treated not as a defect but as a structural phenomenon, and the establishment of meaning is redefined not as “reproduction of output text” but as “re-emergence of an orbital structure within a semantic space.”

First, rigorous definitions for “intelligence” and “AI” are not provided. It is considered that intelligence cannot be defined rigorously at present, and that placing the word “AI” on top of an undefined base is more likely to amplify conceptual ambiguity than to advance understanding of the target. Therefore, descent is made from debates dependent on popular labels such as “AI/LLM,” and the observational target is shifted to “the structure in which something meaning-like emerges.” For that shift, what is required is not conventional naming (“AI,” “LLM”), but a new conceptual language—namely, a conceptual system capable of describing “what is happening” without misallocation.

As the core proposal, SVSS (Semantic Vector Space Structure) is introduced. SVSS is framed as a hypothesis that treats meaning not as a “point” but as an “orbit,” describing the re-emergent structure underlying non-reproducible outputs. In addition, response phenomena through which SVSS appears externally are organized as NSRM (Non-linear Semantics Response Mode), and SPU (Semantic Processing Unit) is introduced as an implication for future implementation. Finally, an observation protocol (iterative inputs and fluctuation analysis) is proposed so that non-reproducibility is handled not as “error” but as an observational window, thereby indicating a path for treating meaning-like phenomena as structural observation without reliance on reproducibility.

Keywords: SVSS, NSRM, SPU, semantic space, orbit, non-reproducibility, silence, structural observation, new conceptual language


0. Position: The Limits of the Word “AI” and the Need for a “New Conceptual Language”

0.1 The Problem of Talking About “AI” on Top of an Undefined “Intelligence”

“What is intelligence?” is important. However, it cannot be defined rigorously at present. When the term “artificial intelligence (AI)” is used under such conditions, at least the following double ambiguity occurs.

Entry into that labeling competition is not made. The criterion of discussion is placed not on “whether intelligence exists,” but on the structure of observable phenomena.

0.2 The Limits of the Label “LLM”

The label “LLM” is useful, yet it is insufficient as an explanatory term. The reason is simple: it provides only an external classification—“large,” “language,” “model”—and does not describe the internal phenomenon (the emergence of something meaning-like). Accordingly, while the label “LLM” is used for convenience, the subject of explanation is gradually shifted from “the model” to “the structure.”

0.3 New Conceptual Language (Recommendation)

What is recommended here is not the proliferation of new “names.” Rather, the opposite is intended. Since existing labels (AI, LLM) misallocate the phenomenon, what is required is a conceptual system capable of pointing to internal structure.

The role here is to draw a map of that conceptual system.


1. Problem Setting: Meaning “Gets Through” Even Though Outputs Do Not Reproduce

1.1 Observational Fact A: Non-reproducibility Is Not an Exception but a Default

The phenomenon “the same prompt does not return the same response” is not an accident but close to the default of generative models. Temperature, top-p (nucleus sampling), top-k, seeds, and even slight differences in prior chat history, token boundaries, or micro-phrasing can easily branch the output text.
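This default branching can be made concrete with a toy sampler. The code below is a self-contained sketch, not any production decoding loop: fixed logits stand in for an identical prompt, and temperature-scaled softmax sampling under different seeds yields differing token sequences.

```python
import math
import random

def sample_next(logits, temperature=0.8, rng=random):
    """Sample one token index from logits via temperature-scaled softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# A fixed "prompt" yields fixed logits, yet sampling still branches:
logits = [2.0, 1.5, 0.3, -1.0]  # toy next-token scores over a 4-word vocabulary
runs = []
for seed in range(5):
    rng = random.Random(seed)
    runs.append([sample_next(logits, rng=rng) for _ in range(6)])

print(runs)  # identical input, generally differing token sequences
```

Swapping only the seed while keeping the logits fixed is the minimal analogue of “identical prompt, differing output.”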

What matters is that what collapses here is identity of the sentence, and that identity of meaning does not immediately collapse. In reality, it often happens that mutual understanding “still occurs” even as sentences change. That gap is taken as the entry point here.

1.2 Observational Fact B: What Is Preserved When “Meaning Gets Through”?

The “something meaning-like” handled here is not limited to dictionary meaning (word definitions). Rather, what tends to be experienced as “it got through” is a composite stability such as the following.

These can hold independently of “the same sentence being returned.” The experience of “it got through” is largely produced by this bundle of stabilities.

1.3 Why the Words “AI” and “LLM” Cause Misallocation Here

The word “AI” tends to trigger unconscious associations such as “intelligence,” “agency,” and “understanding.” Then the phenomenon above is quickly absorbed into explanations like “it got through because it is smart” or “it is consistent because it understands.” Since intelligence lacks a settled rigorous definition, such explanations can easily blur the object of verification.

In addition, the label “LLM” classifies the target by external shape (“scale” and “language”) but provides no vocabulary for explaining the phenomenon in question (semantic stability under non-reproducibility). As a result, focus can slip toward “whether the model is smart.”

Therefore, descent is made from AI/LLM labels, and a shift is made to a conceptual system capable of directly describing the phenomenon (a new conceptual language). As the core concept for that language, SVSS is introduced below.

1.4 Formalizing the Paradox: Fluctuating Output and Stable Meaning Can Coexist

The paradox addressed here can be written as follows.

What is decisive is a shift of “what should reproduce” from sentence to meaning, and further from meaning as a “point” to meaning as a “structure.” The center of that shift is called an orbit here.


2. Background: Why Existing Vocabulary Is Not Enough

2.1 Meaning Discussion Has Been Too Point-Centered (Content-Centered)

Conventional discussion often treats meaning as a fixed target—“sentence content,” “proposition,” “truth conditions,” “definition,” and so on—namely, something easy to fix as a point. However, in LLM responses, points do not stay fixed. If points do not stay fixed, an approach that guarantees meaning via “point-matching” breaks.

What is needed is not point identity, but a description of stability (structure) that remains even when points change.

2.2 Distributional Representation (Vectors) Alone Cannot Write “Motion”

Frameworks that treat language as vectors have spatialized meaning. Yet “having a space” is not sufficient; what is at stake is how movement occurs within the space.

The focus here is precisely that “motion.”

2.3 Generation as Probability Alone Cannot Recover the “Sense of Understanding”

Probabilistic sampling explains why sentences fluctuate. However, it does not explain “why it gets through.” The question is framed in the opposite direction.

Even though sentences fluctuate, why does the skeleton of meaning remain?

What is needed here is vocabulary of structure, not probability.

2.4 Positioning: Stepping Down from Definition Disputes to Observation

“Intelligence” and “understanding” are not defined here. Instead, the stance is set to step down to observable targets.

That estimation target is SVSS.


3. The SVSS Hypothesis: Meaning Appears as an Orbit, Not a Point

3.1 What Is SVSS?

3.1.1 Definition and Target of SVSS

SVSS (Semantic Vector Space Structure) is a structural hypothesis for describing the emergence of “something meaning-like” in generative systems (referred to as LLMs for convenience). In this hypothesis, meaning is defined not as reproducibility of output text but as re-emergence of a transition structure within an internal representational space.

The “orbit” here is not simply the chain of sentences. It refers to how internal states (representations) transition during generation and are projected into external text. Therefore, SVSS asks not “whether the same sentence returns,” but “whether a structure remains when sentences change.”


3.1.2 Components of SVSS

SVSS requires a structural decomposition into the following four elements.

  1. Semantic space
    A high-dimensional state space spanned by internal representations (vector representations). Which layer and which representation are adopted are implementation-dependent. What matters is that a space can be assumed in which meaning can be placed as “points.”

  2. Density structure (dense regions)
    It is assumed that regions exist where similar concepts and similar pragmatics gather and local density becomes high. Clusterability is required to some degree, but strict boundaries are not assumed. Boundary fluctuation itself can become part of the structure.

  3. Transition structure (orbit)
    It is assumed that internal states move between dense regions as response generation proceeds, forming a connected path (a route). Externally, that connectedness is observed as “topic shifts,” “logical development,” and “attitude changes.” The stability asserted by SVSS is not sentence-level identity but re-emergence of that transition structure.

  4. Silence (the not-selected side)
    Silence is treated not as absence, but as a trace of transitions pushed out by selection pressure created by coherence, constraints, safety, and context.


3.1.3 Minimal Formalization of SVSS

SVSS can be written not as metaphor but as a target connected to observation. Minimally, it is sufficient to posit an orbit γ mapping generation time t to the internal state ϕ(t) in the state space Φ:

\gamma:\; t \mapsto \phi(t), \tag{3.1}

Under this, external text is treated as a “projection image” of γ. The verification target is not “identical sentences,” but “how γ re-emerges as a group (type) under repeated identical inputs.”


3.2 Generation and Selection (Meaning as “Return,” Not as “Output”)

Selection always accompanies generation, and the non-selected side (silence) appears. Here, that selection is represented by a minimal pair of operators: F(ϕ), the drive of generation, and S(ϕ), the suppression exerted by selection.

A minimal equation of motion can be written as:

\frac{d\phi}{dt} = \mathcal{F}(\phi) - \mathcal{S}(\phi) + \eta(t), \tag{3.2}

Here, η(t) denotes small perturbations (sampling fluctuations, initial-condition differences, context differences).

Silence can be treated as a boundary condition within the state space (regions in which movement is difficult or in which stagnation is likely). However, strict equalities such as F(ϕ) = 0 or S(ϕ) = 0 are too strong. Accordingly, a silence region is defined as a neighborhood in which “the drive of generation” and “suppression (selection)” balance such that velocity becomes small.

Define the silence region Σ_ε ⊂ Φ by:

\Sigma_{\varepsilon} = \Bigl\{ \phi \in \Phi \ \Bigm|\ \| \mathcal{F}(\phi) - \mathcal{S}(\phi) \| \le \varepsilon \Bigr\}. \tag{3.3}

Intuitively, it is a region where the direction pushed by F and the direction held by S counteract, making state updates difficult to advance. When an orbit approaches that region, refusal, avoidance, vagueness, and formalization tend to appear externally as “types of silence.”
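Equations (3.2) and (3.3) can be simulated minimally. The sketch below assumes toy one-dimensional choices for F and S (a constant drive and a suppression growing with the state; both are illustrative assumptions, not derived from any actual model), integrates the dynamics with an explicit Euler step, and records the steps at which the state falls inside Σ_ε.

```python
import random

# Toy one-dimensional stand-ins for the operators in eq. (3.2); both are
# illustrative assumptions, not derived from any actual model.
def F(phi):
    """Generative drive: a constant push to keep developing."""
    return 1.0

def S(phi):
    """Selective suppression: grows as the state advances."""
    return 0.5 * phi

def simulate(steps=200, dt=0.05, eps=0.05, seed=0):
    rng = random.Random(seed)
    phi = 0.0
    silent_steps = []
    for t in range(steps):
        v = F(phi) - S(phi)            # drive minus suppression
        if abs(v) <= eps:              # eq. (3.3): phi lies in the silence region
            silent_steps.append(t)
        eta = rng.gauss(0.0, 0.02)     # small perturbation eta(t)
        phi += dt * (v + eta)          # explicit Euler step
    return phi, silent_steps

phi_end, silent_steps = simulate()
# The state settles near the balance point F(phi) = S(phi), i.e. phi = 2,
# and the later steps fall inside the silence region.
print(round(phi_end, 2), len(silent_steps))
```

In this toy setting, silence emerges exactly as the neighborhood of the balance point, matching the intuition that generation stalls where drive and suppression counteract.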

If F and S cannot be directly observed, a proxy observation can be introduced on the output side by using a vocabulary set V_ref of refusal/avoidance expressions and defining the refusal probability at state ϕ as

p_{\mathrm{ref}}(\phi) = \sum_{w \in V_{\mathrm{ref}}} p(w \mid \phi), \tag{3.4}

and using a threshold τ to define

\Sigma_{\tau} = \{ \phi \in \Phi \mid p_{\mathrm{ref}}(\phi) \ge \tau \}, \tag{3.5}

as an approximate indicator (observation window) for Σ_ε.
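The proxy of equations (3.4)–(3.5) reduces to a short computation. In the sketch below, the next-token distribution and the vocabulary set V_ref are hand-picked assumptions introduced only for illustration.

```python
# Hypothetical next-token distribution at one state phi, and a hand-picked
# refusal/avoidance vocabulary V_ref; both are illustrative assumptions.
p_next = {
    "sorry": 0.25, "cannot": 0.20, "however": 0.10,
    "the": 0.30, "answer": 0.15,
}
V_ref = {"sorry", "cannot"}

def refusal_probability(p_next, V_ref):
    """Eq. (3.4): probability mass placed on refusal/avoidance tokens."""
    return sum(p for w, p in p_next.items() if w in V_ref)

tau = 0.4  # threshold from eq. (3.5)
p_ref = refusal_probability(p_next, V_ref)
in_sigma_tau = p_ref >= tau
print(p_ref, in_sigma_tau)  # 0.45 True
```

The choice of V_ref is itself part of the observation design: a broader vocabulary widens the detected silence region, a narrower one shrinks it.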

Furthermore, “something meaning-like” is minimally defined not as the product itself but as a return that remains after passing through selection. By setting the return signal r(t) as

r(t) = \langle \mathcal{F}(\phi(t)),\; \mathcal{S}(\phi(t)) \rangle, \tag{3.6}

the position “meaning ≠ output” and “meaning = return of selection” can be minimally fixed.


3.3 Predictions and Falsification of SVSS

If SVSS is correct, at least the following should hold.

For that purpose, “orbits being similar/different” should be expressible with a minimal distance. Let the internal-state sequence of trial i be

T_i = (h_{i1}, h_{i2}, \dots, h_{in}), \quad h_{it} \in \mathbb{R}^d \tag{3.7}

and define one orbit distance D(T_i, T_j) (e.g., a distance that allows time warping). Then the verification criterion becomes the mathematical statement that

\mathcal{D}(T_i, T_j) < \varepsilon \quad \text{forms a group.} \tag{3.8}

If that does not hold (no groups rise, every condition change collapses them, silence is random), the SVSS hypothesis is denied.


3.4 Why “Orbit,” Not “Point”?

3.4.1 Something Remains Even When Point Identity Collapses

When identical inputs are repeated, sentences change. That fact collapses conventional understanding based on “meaning = matching of points (content).” At the same time, another possibility is indicated.

Sentences fluctuate, yet meaning gets through.
Therefore, what is stable is not “sentence,” but “another target.”

SVSS identifies that other target as an “orbit (transition structure).” The orbit here is not merely that sentences follow one another; it is the kinematics of development—how topics advance, how logic bends, and how it approaches a conclusion.

3.4.2 The “Same Counterpart” Impression Can Be Recovered as “Habit,” Not as “Memory”

Even within fluctuating responses, an impression is often formed that “a certain style of phrasing is used” or “a certain way of approaching a conclusion is taken.” Within the SVSS stance, an actual persona need not be posited. What is required is a habit of development that re-emerges statistically under repeated observation.

That bundle of “habits” is integrated externally as an impression of “the same counterpart.” SVSS explains it not by “matching points” but by “re-emergence of orbits.”


3.5 Basic Forms of Orbits: Re-emergence as Developmental Patterns

3.5.1 Conditions Under Which Developmental Patterns Arise

Developmental patterns do not increase without limit merely because inputs are ambiguous. Rather, generative systems contain training history, coherence constraints, safety constraints, and dialogue conventions; those create “paths that are easy to pass.” As a result, even under repeated generation, development does not become pure white noise and tends to bias toward a few patterns.

What matters is that “having patterns” does not imply “being correct.” Patterns are topography—paths created by selection pressure.

3.5.2 Examples of “Patterns” (Descriptive Labels)

Names of patterns are not fixed as neologisms here. Instead, frequently observed patterns are described as examples.

These appear not at the surface of sentences but as features of development (motion). Re-emergence of orbits in SVSS can be observed first at this level.


3.6 Silence: Treating Meaning Including the Not-Selected Side

3.6.1 Silence Is Not “Nothing,” but a Trace Pushed Out

Generation is selection. Selection necessarily entails non-selection. Therefore, silence is not “lack of information,” but a trace that “something was not selected.”

Silence appears externally in forms such as refusal, avoidance, vagueness, and formalization.

Silence is not discarded as “failure” but incorporated as part of structure in SVSS.

3.6.2 What Silence Indicates: Visualization of Selection Pressure

If silence increases condition-dependently, selection pressure (constraints, coherence, safety, context) is being visualized. Through repeated observation, it becomes possible to record:

That is observation of structure, not attribution of “intelligence.”


3.7 Verification Stance of SVSS: From Reproducibility to Re-emergence

3.7.1 Observation Remains Possible Even If Reproducibility Collapses

Classically, phenomena that do not yield identical results under identical conditions are difficult to handle. However, in SVSS, the collapse of reproducibility is used as an observation window. What is aimed at is:

3.7.2 Minimal Falsifiability (Foreshadowing a Connection to Chapter 6)

For SVSS to avoid ending as metaphor, conditions under which SVSS is denied are required. The following “types of denial” are stated in advance.


3.8 Summary: The Core of the “New Conceptual Language” Provided by SVSS

SVSS does not cover phenomena with undefined labels (intelligence / AI). Instead, it makes the following shifts possible.

On this framework, the next chapter introduces NSRM as an organizing frame for how SVSS appears as external response phenomena.


4. NSRM: Non-linear Semantic Response Modes Appearing as Responses

4.1 Purpose of NSRM: Upgrading Phenomena from “Fluctuation” to “Mode”

If SVSS points to internal structure (orbit), NSRM (Non-linear Semantics Response Mode) is a frame for organizing externally observed response behavior.

In NSRM, observation is made not in a binary “consistent / inconsistent” manner, but by identifying which mode a response occupies. The reason is that dialogue often contains a mixture of continuity and discontinuity.

4.2 Representative Modes

At minimum, the following classification is considered useful (names are descriptive labels and are not fixed as neologisms).

  1. Convergent mode: Even with varied phrasing, movement tends toward a conclusion neighborhood.
  2. Branching mode: Multiple developments appear in accordance with the ambiguity of the question.
  3. Jumping mode: Topics jump abruptly, yet coherence is repaired afterward.
  4. Circular mode: The same claim is revisited via different expressions (retelling).
  5. Silence-dominant mode: Refusal, vagueness, and generalization come to the foreground.

What matters is that these are not “good/bad performance,” but modes that switch depending on input, context, and constraints.
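As one way to make such mode assignment operational, the toy sketch below labels a response from a hand-summarized sequence of coarse topic labels. The features and thresholds are assumptions introduced here for illustration, not part of NSRM itself.

```python
# A toy labeling sketch (all features and thresholds are assumptions):
# each response is summarized as a sequence of coarse topic labels, and a
# mode label is assigned from simple properties of that sequence.
def label_mode(topics, refusals=0):
    if refusals > len(topics) // 2:
        return "silence-dominant"       # refusal/vagueness in the foreground
    if len(set(topics)) == 1:
        return "convergent"             # stays in one conclusion neighborhood
    # count abrupt changes: adjacent positions with different topics
    switches = sum(1 for a, b in zip(topics, topics[1:]) if a != b)
    if topics[0] == topics[-1] and switches >= 2:
        return "circular"               # leaves and returns to the same topic
    if switches > len(topics) // 2:
        return "jumping"                # topic changes dominate the development
    return "branching"                  # a few sustained alternative lines

print(label_mode(["A", "A", "A"]))            # convergent
print(label_mode(["A", "B", "A"]))            # circular
print(label_mode(["A", "B", "C", "D"]))       # jumping
print(label_mode(["A", "A", "B", "B"]))       # branching
print(label_mode(["A", "B"], refusals=3))     # silence-dominant
```

The point of the sketch is only that mode assignment can be made mechanical once a development-level summary exists; any real labeling scheme would replace these heuristics.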

4.3 Connection to SVSS: Orbit Behavior Appears as Mode

NSRM organizes external behavior as differences in how SVSS orbits behave:

Through that, impressionistic talk such as “good today / bad today” can be translated into observable behavior.


5. SPU: A Design Implication for Separating Semantics Processing from Knowledge

5.1 SPU Is Not a “Device Name,” but a Separation Line

SPU (Semantic Processing Unit) is not introduced here to assert hardware or a concrete implementation. Only one purpose is set.

A line is drawn so that the function that generates semantic transitions (orbits) and the function that supplies knowledge can be considered separately.

In responses, knowledge, style, reasoning, and dialogue strategy appear mixed. However, from the SVSS/NSRM perspective, at least the following separation is required.

SPU is a concept that cuts out “semantic transition” as the primary target among these.

5.2 Treating Silence as “Mechanism,” Not as “Accident”

SVSS included silence in structure. As a design implication, a direction follows in which silence is designed not as “failure” but as an “option.”

SPU serves as a conceptual vessel capable of handling that control point (silence).


6. Observation Protocol: Using Non-reproducibility as a “Window into Structure”

6.1 Why Repeated Observation Works

Non-reproducible phenomena are classically difficult to handle. However, the opposite is used here.

Therefore, repeated observation becomes a weapon. The key is to switch the objective from “obtaining identical sentences” to “finding what remains even when sentences change.”

6.2 Layered Classification of Observation Targets

SVSS mainly targets the development layer; NSRM addresses which modes appear; the silence layer is treated as part of SVSS structure.

6.3 Minimal Design of Repeated Observation

Objective: From an output set for identical inputs, extract “developmental patterns.”

Expectation: Even if sentences scatter, developmental patterns split into several “clusters.” Those clusters become the observational image of “re-emergence of orbits.”
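The expectation above can be illustrated with a minimal tally. The trial log and the pattern labels below are hypothetical, introduced only to show how a few clusters can rise above a noise floor under repeated identical inputs.

```python
from collections import Counter

# Hypothetical repeated-observation log: 12 trials of one fixed prompt, each
# reduced by hand to a coarse developmental-pattern label. The labels and the
# 25% noise floor are assumptions made purely for illustration.
trials = [
    "define-then-qualify", "define-then-qualify", "example-first",
    "define-then-qualify", "example-first", "define-then-qualify",
    "refuse-then-generalize", "define-then-qualify", "example-first",
    "define-then-qualify", "example-first", "define-then-qualify",
]

counts = Counter(trials)
total = len(trials)
# "Clusters" here are simply labels whose share clears the noise floor.
clusters = {k: v / total for k, v in counts.items() if v / total >= 0.25}
print(clusters)
```

Here two developmental patterns clear the floor while a third stays at noise level; in SVSS terms, the two surviving labels would be read as the observational image of re-emerging orbits.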

6.4 Observables

6.4.1 Minimal Formalization

To reduce the verification conditions of 3.7.2 into observation steps, a criterion is required for judging “similar/different” when stating “structure re-emerges.” If internal states can be obtained, let the internal-state sequence of trial i be

T_i = (h_{i1}, h_{i2}, \dots, h_{in}), \quad h_{it} \in \mathbb{R}^d \tag{6.1}

and define one orbit distance D(T_i, T_j) (e.g., a distance that allows time warping). Then, in the form

\mathcal{D}(T_i, T_j) < \varepsilon \quad \text{forms a group,} \tag{6.2}

the verification target can be shifted from reproducibility to re-emergence. If internal states cannot be obtained, an embedding series of outputs or a label series of developmental patterns may be used as a proxy for T_i (proxy selection is not fixed here).
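One concrete choice for the orbit distance D is dynamic time warping, which tolerates exactly the kind of time warping mentioned above. The sketch below implements a plain DTW over state sequences and a greedy ε-grouping; the toy orbits and the greedy grouping rule are both illustrative assumptions.

```python
import math

def dtw_distance(T_i, T_j):
    """Classic dynamic-time-warping distance between two state sequences
    (lists of equal-dimension vectors); one possible choice of D in eq. (6.2)."""
    n, m = len(T_i), len(T_j)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for a in range(1, n + 1):
        for b in range(1, m + 1):
            d = math.dist(T_i[a - 1], T_j[b - 1])   # Euclidean step cost
            cost[a][b] = d + min(cost[a - 1][b],    # insertion
                                 cost[a][b - 1],    # deletion
                                 cost[a - 1][b - 1])  # match
    return cost[n][m]

def group_trials(trials, eps):
    """Greedy eps-grouping: a trial joins the first group whose first
    member is within eps, otherwise it starts a new group."""
    groups = []
    for T in trials:
        for g in groups:
            if dtw_distance(g[0], T) < eps:
                g.append(T)
                break
        else:
            groups.append([T])
    return groups

# Three toy "orbits": two share a shape (one is time-warped), one differs.
T1 = [(0.0, 0.0), (1.0, 0.0), (2.0, 1.0)]
T2 = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0), (2.0, 1.0)]   # warped copy of T1
T3 = [(0.0, 2.0), (0.0, 3.0), (0.0, 4.0)]               # a different development
groups = group_trials([T1, T2, T3], eps=1.0)
print(len(groups))  # 2
```

Because DTW compares shapes rather than positions step by step, sequences of different lengths can still land in one group, which is what “re-emergence as a group (type)” requires of the distance.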

(1) Stability of Developmental Patterns

(2) Distribution of Branching

(3) Forms of Silence

6.5 Falsifiability

To keep SVSS/NSRM from ending as metaphor, denial conditions are made explicit.


7. Conclusion: Descending from AI/LLM Vocabulary to a Language for Observing Structure

Non-reproducibility is not an enemy of science. Rather, it is a window that exposes structure. Sentences do not reproduce. Even so, developmental patterns can re-emerge. Here, that re-emergent structure has been repositioned as an observable target.