
My Experience with Mathematical Typesetting: Introducing Lazy LaTeX

2025-12-03 Mathematics

  1. Typst (and why I abandoned it)
  2. Writing LaTeX with snippets (and what feels suboptimal)
  3. It’s 2025. Why not LLM? (aka Lazy LaTeX Mathematics)
  4. Philosophy: to make AI disappear

TL;DR: I made a VS Code extension that lets you write math in natural language and have it turned into LaTeX automatically. Skip the first two sections if you only care about the extension.

I don’t know why the world always gets stuck with bad designs: English (mystical spelling rules), QWERTY keyboards (designed to slow typists down), JavaScript (an ill-named language finished in 10 days), and LaTeX. You can stop reading now if you have never forgotten the backslash in front of the curly brackets when you meant to write a set, or if you have always remembered how to typeset a piecewise-defined function with ease.

Typst (and why I abandoned it)

I have been unhappy with LaTeX for a long time, which is why I used to be a big fan of the new typesetting language Typst and tried to convince everyone I knew to try it. I even wrote my entire master’s dissertation in Typst. The experience itself was rather nice; the cleaner syntax boosted my overall productivity, and I was able to draw nice diagrams using CeTZ. I made a (very simple) tikzcd fork to draw commutative diagrams back then; nowadays you should use quiver directly, as it already supports Typst. I also recall having to ask people to fix their newly released packages (like here, for wordometer, so that I could add a word count), but the problems were solved rather quickly. Since my department only asked for a PDF file of the dissertation, I had no problem receiving my degree.

The only problem came when I tried to submit my work to arXiv and realized that they only accept LaTeX source (submitting the PDF alone was acceptable but certainly less desirable). Converting a complicated document from Typst to LaTeX is rather painful. Although pandoc (and some other online tools like this) ostensibly offers such a function, it won’t really work if your document contains more than a few imported or self-defined environments or variables, if it spans multiple files that cross-reference each other, or, even worse, if you have coded diagrams. You probably need regular expressions for some pre- and post-cleaning before pandoc can do its job, and you certainly have to fix things manually before asking your favorite LLM to help you convert the diagrams.

Then I realized that my situation is characterized by the following three axioms:

  1. If I work in academia, I still have to produce LaTeX documents for publication compatibility and for collaboration with other writers.
  2. I never want to convert anything from Typst to LaTeX again; that’s simply a nightmare.
  3. I don’t really want to use two different typesetting languages at the same time; the muscle memory for one would work against the other.

The simple proposition that follows is that I have to go back to using LaTeX.

Writing LaTeX with snippets (and what feels suboptimal)

The good news is that over the years people have come up with ways to write LaTeX less painfully. In particular, one may use snippets in code editors so that commonly used pieces of code can be produced with far fewer keystrokes; essentially, snippets let you simplify the LaTeX syntax. The main reference point is this post, which discusses how to use Vim and its extension UltiSnips to write LaTeX, allegedly as fast as the lecturer writes on the board. For the less tech-savvy (i.e. non-Vim) user, an alternative is the VS Code extension HyperSnips. There are quite a few existing snippet repositories (e.g. here) that can be used out of the box. This approach has been rather useful to me personally, and I keep updating my own snippets as my needs evolve.
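To give a flavor, here is a minimal sketch of two such snippets in HyperSnips-style syntax (the triggers are illustrative choices of mine, and your snippet engine’s escaping conventions may differ slightly):

snippet // "fraction" iA
\frac{$1}{$2}$0
endsnippet

snippet -> "to" iA
\to
endsnippet

The A flag makes a snippet expand automatically without pressing Tab, and i lets it fire in the middle of a word, so typing // immediately gives you \frac{}{} with the cursor placed in the numerator.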

One problem I gradually realized while using snippets is that you have to constantly create new ones, and many of them are only useful ‘locally’, i.e. in the small section of the document where they are mentioned most frequently. For example, if you work on a document on TQFT, then you probably have to type things like n\mathrm{Cob}_2 \to 2\mathrm{Vect} (i.e. $n\mathrm{Cob}_2 \to 2\mathrm{Vect}$). If you use snippets (like this), then typing Cobrm will produce \mathrm{Cob} for you. However, if things like n\mathrm{Cob}_2 or 2\mathrm{Cob}^{\mathrm{ext}} appear many times in the discussion, you probably want to add Cob as a snippet that gives you \mathrm{Cob}, and ext as a snippet giving ^{\mathrm{ext}}. But maybe 2\mathrm{Cob}^{\mathrm{ext}} is only discussed in one particular paper and you won’t use the notation again, and soon enough you will have too many rarely used snippets competing against each other. The usual workaround is to manually keep track of a context in which only a certain subset of the snippets is activated.
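Concretely, those two ‘local’ snippets might look like this (again a sketch, with flags of my own choosing):

snippet Cob "cobordism category" A
\mathrm{Cob}
endsnippet

snippet ext "extended" iA
^{\mathrm{ext}}
endsnippet

Note how a short in-word trigger like ext would also fire inside ordinary words such as ‘context’ unless you restrict it, which is exactly the competition problem just described.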

So the problem here is how to easily type something that has already been mentioned in the context, preferably without spelling out all of its decorations. Consider another example:

Let $\mathcal{C}$ be a locally small category. Then $h_A := \mathrm{Hom}(A, -)$ is a functor $\mathcal{C} \to \mathbf{Set}$. The Yoneda Lemma states that for any functor $F : \mathcal{C} \to \mathbf{Set}$, there is a bijection

$$
\mathrm{Nat}(h_A, F) \cong F(A) .
$$

Here $\mathcal{C}$ has been defined as a category, and we wish to decorate it with \mathcal all the time. (Usually the undecorated $C$ isn’t even discussed.) Similarly, we want $\mathbf{Set}$ to carry the bold face \mathbf all the time. Again, using snippets, you type Ccal for every occurrence of $\mathcal{C}$; you may even define, in this case, a snippet like Cc that gives you \mathcal{C} directly. But it still feels suboptimal.

It’s 2025. Why not LLM? (aka Lazy LaTeX Mathematics)

The key observation is that when you talk to LLMs like ChatGPT, you never have to type LaTeX formulas strictly following the syntax. You may say things like integral of x from 0 to 1/2 1/(x (x+1)) dx and the LLM will (usually) understand you perfectly.
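For instance, a decent model would typically turn that prompt into something like

$$
\int_0^{1/2} \frac{1}{x(x+1)} \, dx
$$

without you ever having to spell out \int, \frac, or the braces.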

This implies that you may do the same thing when working on a LaTeX (or Markdown) file that contains math formulas, as long as you have an LLM that converts things automatically. In other words, one can replace the obtuse LaTeX syntax with a much fuzzier and more flexible one with the help of an LLM; in fact, this fuzzy, natural-language-based ‘syntax’ can be even simpler and looser than Typst.

Based on this idea, I made a VS Code extension named Lazy LaTeX (find it on the Extension Marketplace) that does exactly that. You can find out how to use it in the documentation, but long story short: you plug in your favorite LLM API and start writing things like this:

Let ;;cal C;; be a locally small category. 
Then ;;h_A := rm Hom(A, -);; is a functor ;; C -> bf Set;;.
The Yoneda Lemma states that for any functor ;;F : C -> Set;;,
there is a bijection ;;;rm Nat(h_A, F) iso F(A);;;.

Then, each time you press Enter, it becomes exactly what you’d expect:

Let $\mathcal{C}$ be a locally small category. 
Then $h_A := \mathrm{Hom}(A, -)$ is a functor $\mathcal{C} \to \mathbf{Set}$.
The Yoneda Lemma states that for any functor $F : \mathcal{C} \to \mathbf{Set}$,
there is a bijection
$$
\mathrm{Nat}(h_A, F) \cong F(A) .
$$
  • The wrapper ;; ... ;; gives inline math, and
  • the wrapper ;;; ... ;;; gives display math.

Note that the LLM can complete the decorators for C and Set based on the context, so you don’t have to type them every time.

The extension is designed with context awareness: it can be configured to access previous lines of the document you are working on, so that it knows, for example, the current notational conventions. You may also give the LLM additional directions by adding a text file .lazy-latex.md to your project root directory, or by modifying the extension settings.
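To make this concrete, here is a hypothetical .lazy-latex.md (the file is free-form natural language, so these particular directions are only my illustration):

Typeset all categories with \mathcal, e.g. \mathcal{C}, \mathcal{D}.
Named categories such as Set and Vect get \mathbf.
Write isomorphisms with \cong, not \simeq.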

One must acknowledge that LLMs are not designed to be error-free. However, my view is that this use case is justifiable. First, only things inside the wrappers ;; ... ;; and ;;; ... ;;; will be touched by the LLM, so the extension never prevents you from writing LaTeX in the usual wrappers (like $ ... $), possibly using the snippet extensions mentioned above, and never interferes with that. Second, since the formulas will be rendered by your LaTeX compiler, you can actually see in the document whether they are as expected (you have to do that anyway even if you write orthodox LaTeX code), so any error should be easy enough to spot. Third, as of 2025, LLMs don’t really make many mistakes in LaTeX code generation, as long as you use a decent model. LLM APIs will cost you a small amount of money, and you’re free to decide whether that’s worth it for you.

(One subtle point here is that, thanks to the large amount of existing LaTeX code, LLMs are in general better trained to generate LaTeX than Typst. So the same idea may not work as well with Typst. Ironically, one potential pathway is to use mitex to write Lazy LaTeX inside Typst; but I haven’t implemented that.)

One possible workflow with Lazy LaTeX looks like this:

  • You have a decent amount of snippets for most of the basic things.
  • When you define a new variable, or when you write the first complicated equation (followed by equally complicated but similar equations), you write the LaTeX code with snippets as usual.
  • When you have to repeatedly mention a variable, or to repeatedly produce similar equations, you use the Lazy LaTeX wrappers so that the LLM can produce them for you based on what you have typed.
  • When you don’t remember how to type a strange symbol or some big environment (e.g., matrices, piecewise-defined functions), just describe it to the LLM and let it generate the code for you, as in the sketch below.
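As a sketch of that last point, describing a piecewise-defined function might look like this (a toy input of mine; the exact phrasing doesn’t matter much, which is the point):

;;;f(x) = cases: x^2 if x >= 0, -x otherwise;;;

which the LLM would expand to something like

$$
f(x) =
\begin{cases}
x^2 & x \ge 0 , \\
-x & \text{otherwise} .
\end{cases}
$$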

However, I imagine it is also possible to get rid of LaTeX entirely when writing mathematical formulas, if you insist on using Lazy LaTeX alone.

I’m still experimenting with this new workflow in my LaTeX writing. So far there’s been a noticeable boost in productivity: it helps me forget about LaTeX and focus on the mathematics itself. I’ll keep updating the extension as I use it, and I’ll record any fun or unexpected usage. In the meantime, if you experiment with it, I’d love to hear your thoughts or see examples of how you use it.

Philosophy: to make AI disappear

I could end this post here, but I feel the need to answer the following question: the idea of using an LLM to help with LaTeX writing isn’t new, so why bother making another tool like Lazy LaTeX? The short answer is simple: the existing ones just aren’t seamless enough.

The most similar tool is AI LaTeX Helper, another VS Code extension. Its functionality seems almost identical at first glance. But notice how the workflow is different:

  • In AI LaTeX Helper, you pause and wait for completions inside the LaTeX editor. I don’t like to wait while writing, as it breaks the rhythm completely. It’s better to let the AI work quietly in the background while I keep writing the next line, and check the rendered output later.
  • There’s no clear syntactical distinction between ‘AI LaTeX’ and ordinary LaTeX. I want that distinction. It helps me maintain two mental states: one for human-friendly shorthand, one for conventional syntax. The switch between them should be intentional, not accidental.
  • There is no context awareness at all, and you can’t give customized directions. So the LLM doesn’t have access to your current notational conventions, definitions of variables, etc., and you can’t perform the trick of dropping \mathcal every time you write C. Lazy LaTeX reads the surrounding context so that you can.

In Lazy LaTeX, we treat LLM prompts as a new layer of syntax, running in parallel with traditional LaTeX. The LLM acts as a compiler that converts human-readable prompts into structured LaTeX. It is not a writing assistant that interrupts you for feedback; it’s a silent background process that simply gets its job done. In other words, Lazy LaTeX isn’t trying to make the writer talk to the AI; rather, it’s trying to make the AI disappear. The goal is a workflow so natural that you forget there’s a model involved at all. And that summarizes the key difference between Lazy LaTeX and the other existing AI enhancements for LaTeX.
