My research rests on a single claim: intelligence — in biological cognition, in neural networks, and in the systems we build — is better understood as accidental discovery in finite structure than as optimal search in infinite space. I work on the computational theory this claim implies, and on runnable systems that test its corollaries.
Theoretical base
The theoretical base began with an observation: single-scalar models of memory conflate two logically independent quantities — how much of a trace exists, and how faithfully it preserves its original encoding. Separating these (structural accumulation vs. representational fidelity) and coupling them through a small system of differential equations resolves a set of phenomena that unitary-strength models cannot explain: why extinction is temporary but retrieval-extinction produces lasting change; why stress-enhanced fear memories persist orders of magnitude longer than ordinary ones; and why the spacing effect has interpretable boundary conditions. The resulting framework — Structural Crystallization — is currently under review at Psychological Review.
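The separation is easy to sketch numerically. The dynamics below (the rates `k_grow`, `k_decay`, `k_fade` and their linear forms) are placeholders I chose for illustration, not the equations from the Structural Crystallization manuscript; the point is only that the two quantities evolve independently:

```python
def simulate(drive, steps=2000, dt=0.01,
             k_grow=1.0, k_decay=0.05, k_fade=0.02):
    """Euler-integrate a toy two-quantity trace: structural accumulation S
    and representational fidelity F (placeholder dynamics, not the paper's)."""
    S, F = 0.0, 1.0  # start with no structure and perfect fidelity
    for _ in range(steps):
        dS = k_grow * drive * F - k_decay * S   # structure builds, gated by fidelity
        dF = -k_fade * drive * F                # reactivation slowly erodes fidelity
        S += dS * dt
        F += dF * dt
    return S, F

S_active, F_active = simulate(drive=1.0)   # structure up, fidelity down
S_idle, F_idle = simulate(drive=0.0)       # neither quantity moves
```

A single scalar cannot represent the first run's end state, a trace that is large but degraded, and that is exactly the dissociation the phenomena above exploit.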
From this base I've been pushing the same structural logic in four directions.
To artificial memory systems. In CrystalCache, I port the two-dimensional memory framing to the KV-cache eviction problem in long-context LLM inference. A trunk-level score combining associative crystallization density and encoding impact outperforms H₂O, SnapKV, and ChunkKV across twelve model–task configurations.
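A trunk-level score of this shape can be sketched in a few lines. The definitions of density and impact below, the 0.1 activity threshold, and the 0.7/0.3 weights are invented for illustration and are not CrystalCache's actual formula:

```python
def trunk_scores(attn, trunk_size=4, alpha=0.7, beta=0.3):
    """Score each trunk (contiguous span of cached tokens) from per-token
    cumulative attention mass. Density/impact definitions are placeholders."""
    scores = []
    for start in range(0, len(attn), trunk_size):
        chunk = attn[start:start + trunk_size]
        density = sum(1 for a in chunk if a > 0.1) / len(chunk)  # fraction of active tokens
        impact = max(chunk)                                      # strongest single token
        scores.append((alpha * density + beta * impact, start))
    return scores

def evict(attn, budget_trunks, trunk_size=4):
    """Keep the highest-scoring trunks that fit the cache budget."""
    keep = sorted(trunk_scores(attn, trunk_size), reverse=True)[:budget_trunks]
    return sorted(start for _, start in keep)
```

Scoring at trunk granularity rather than per token is what distinguishes this framing from per-token heuristics like H₂O: a trunk survives or dies as a unit, preserving local context around its high-impact tokens.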
To training-data statistics. While investigating why a learning-based stereo system (PIDS) failed on transparent surfaces, I identified a statistical mask inherent to Monte Carlo rendering: transparent regions have image variance 4.18× higher than non-transparent regions, stable to within 0.15% across a 16× sampling range. This became a science-fair paper, which took first place regionally and will represent Tainan at the national round in July 2026. A follow-up investigation (StaMask) shows that BatchNorm and L1 normalization in standard stereo architectures erase this signal before it reaches the matching layer — reframing one component of the sim-to-real gap from a learning-dynamics problem to an architectural one.
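The region statistic itself is cheap to compute given a transparency mask. The sketch below assumes a flat pixel list and a boolean mask; the 4.18× figure comes from the rendered PIDS data, not from this toy:

```python
from statistics import pvariance

def variance_ratio(pixels, transparent_mask):
    """Population variance of pixels inside transparent regions, divided by
    the variance outside them (toy form of the region statistic)."""
    inside = [p for p, t in zip(pixels, transparent_mask) if t]
    outside = [p for p, t in zip(pixels, transparent_mask) if not t]
    return pvariance(inside) / pvariance(outside)
```

Any layer that divides activations by their own spread removes exactly this scale information before matching, which is roughly the mechanism StaMask identifies in BatchNorm and L1 normalization.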
To architectural constraints on grounding. A parallel project on automatic speech recognition (CrystalASR) shows the same principle in a different modality: a strict upward information-flow constraint — where higher layers can modulate but not override lower-level acoustic evidence — preserves phonetic grounding that end-to-end models lose. A language-model weight sweep reveals a sharp phase transition: beyond a narrow tiebreaker regime, lexical priors overwhelm acoustic evidence and word error rate rises from 17% to 96%. This is the empirical mirror of StaMask's finding: one shows what standard architectures lose; the other shows what strict constraints preserve.
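The phase transition has a simple toy analogue: score each candidate word as its acoustic log-probability plus λ times its language-model log-probability, then sweep λ. The candidates and numbers below are invented for illustration; CrystalASR's decoder is more involved:

```python
def decode(acoustic_logp, lm_logp, lam):
    """Pick the word maximizing acoustic evidence plus lam * lexical prior."""
    return max(acoustic_logp, key=lambda w: acoustic_logp[w] + lam * lm_logp[w])

# Acoustics favor "bear" by 2 nats; the lexical prior favors "bare" by 4 nats.
acoustic = {"bear": -1.0, "bare": -3.0}
lm = {"bear": -6.0, "bare": -2.0}
```

In this toy the winner flips abruptly at λ = 0.5, the point where the weighted prior gap first exceeds the acoustic gap; below it the weight acts only as a tiebreaker, above it the prior overrides the acoustics outright, an analogue of the 17%-to-96% jump.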
To a runnable theory of emergence. The current work (Crystal Lattice Intelligence Engine) simulates crystallization dynamics on a fixed 2D torus: concept domains crystallize into connected structures, high-intensity unreferenced access deforms their boundaries, and the experimental question is whether new coherent structures emerge from these collisions. This is the most direct computational test of the opening claim.
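The lattice substrate reduces to two small functions: periodic neighborhoods and a growth step. The rules below are a deliberately minimal stand-in for the Engine's actual dynamics (no access intensities, no boundary deformation):

```python
def neighbors(x, y, n):
    """4-neighborhood on an n x n torus: coordinates wrap around the edges."""
    return [((x + dx) % n, (y + dy) % n)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def grow(grid):
    """One synchronous step in which each labeled domain (nonzero cell)
    claims adjacent empty (zero) cells. Placeholder rule, not the Engine's."""
    n = len(grid)
    new = [row[:] for row in grid]
    for x in range(n):
        for y in range(n):
            if grid[x][y] == 0:
                labels = [grid[a][b] for a, b in neighbors(x, y, n) if grid[a][b]]
                if labels:
                    new[x][y] = labels[0]  # ties resolved arbitrarily here
    return new
```

Collisions occur when two domains reach the same empty cell in the same step; the arbitrary `labels[0]` tiebreak is precisely where the Engine's deformation rules would take over.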
How I work
I work independently, without a faculty advisor, on self-funded GPU rentals. I keep complete development logs as a matter of practice — the longest currently runs to about 17,000 lines. I welcome correspondence from researchers whose work intersects any of the above: botimlin@gmail.com.