Why I joined micro1
Isabel Yishu Yang
Strategic Project Lead at micro1
In 2019, 5 months after launching ArbiLex, I attended an international arbitration conference in Atlanta. There was a panel on the “cutting edge” of legal tech, and I still remember the consensus answer: Zoom. Remote hearings felt like the frontier.
To the attendees of that conference, what we were building at ArbiLex felt almost sci-fi: an AI engine to predict arbitration and litigation outcomes.
Over the next six years, we scaled ArbiLex and proved a core thesis: legal texts can be structured into quantifiable data points, and even high-stakes, long-duration disputes are statistically predictable. During the same period, I watched the legal industry’s posture toward AI shift. Slowly at first, then all at once: from cynical skepticism to genuine curiosity; from endlessly deferred budgets to dedicated procurement tracks.
The question is no longer whether lawyers will use AI. It's what we actually want AI to do in the practice of law. And unlike in 2019, there is no consensus.
Perhaps there is still consensus on what we don’t want: hallucinated citations, overconfident errors, and brittle reasoning. What’s harder, yet arguably more important, is defining what we do want. “The best answer” often starts with “it depends.” The practice of law depends on an artful mix of judgment calls: risk tolerance, strategy, forum, timing, client constraints. Two great lawyers can disagree, and both can be right.
That’s why I’m excited to join micro1 as a Strategic Project Lead, leading legal expert data creation and legal research.
Here's the technical shift that made this move feel inevitable.
At ArbiLex, we used classical ML, causal inference, and historical data to predict outcomes. "Ground truth" was comparatively clear: if we predicted a high probability of success and the case won, we were right. Models were trained to be right, and over time they became more accurate than any individual lawyer at predicting outcomes across a large sample of cases.
At micro1, the goal isn't prediction but teaching LLMs judgment. Through RLHF, legal experts rank, refine, and reward model reasoning in real time. We’re no longer training models on binary win/loss outcomes; instead, we’re shaping how a model evaluates tradeoffs, spots issues, and explains decisions with the nuance of a skilled practitioner.
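For readers curious about the mechanics: expert rankings typically train a reward model via a pairwise preference objective (the Bradley-Terry formulation common in RLHF). This is a minimal, illustrative sketch of that objective in plain Python; the function and variable names are my own, not micro1's actual pipeline.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry pairwise loss: small when the expert-preferred
    answer scores higher than the rejected one, large otherwise."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# An expert ranks two model answers to the same legal question.
# The reward model is trained so the preferred answer wins the comparison.
loss_agrees = preference_loss(2.0, 0.5)     # scores already match the expert
loss_disagrees = preference_loss(0.5, 2.0)  # scores contradict the expert
```

Training nudges the reward model toward the low-loss case, so the "right answer" is defined by expert comparisons rather than by a binary win/loss label.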
This is the layer the legal industry has been waiting for, whether it knows it yet or not. Prediction tells us what might happen. Judgment helps us decide what to do about it. If we get this right, we move beyond eager-to-please drafting chatbots and toward systems that can meaningfully support high-stakes, strategic decision-making: systems that know when uncertainty matters, when to ask for missing facts, and how to reason through consequences.
Knowing what’s at stake made the choice of micro1 easy. I joined micro1 for three reasons.
Velocity: the frontier moves fast; micro1’s young and nimble team moves as fast as the frontier labs we support.
The iteration engine: expert data is never one-and-done; micro1’s data infrastructure is built for rapid feedback loops with frontier labs.
Human-centered values: The "H" in RLHF stands for Human. The culture at micro1 is rooted in the belief that the future of intelligence must stay grounded in the expertise only humans provide. Ultimately, the models that win in law won't be the ones trained on the most data. They'll be the ones trained on the best human judgment.
I'm grateful for the ArbiLex chapter and everything our team built there. And I'm genuinely energized for what's next. If you're thinking about where legal AI goes from here, I'd love to connect.
More importantly, if you are a lawyer drawn to the high-impact alternative path of becoming an AI trainer, I sincerely ask that you consider partnering with micro1.