LMQL is a programming language for interacting with LMs that takes a declarative, SQL-flavored approach to specifying constraints on a language model's output.
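The declarative idea can be sketched in plain Python: state the constraint alongside the query instead of scattering post-processing through the code. This is a hedged illustration, not LMQL's actual syntax or implementation; `fake_lm` is a stand-in for a real model call.

```python
# A minimal sketch of the declarative-constraint idea in plain Python.
# `fake_lm` is a hypothetical stand-in that returns candidate completions.

def fake_lm(prompt: str) -> list[str]:
    # Illustrative only: a real model would generate these.
    return ["the city of Paris, which is lovely", "Paris", "Lyon"]

def query(prompt: str, where) -> str:
    """Return the first completion satisfying the declared constraint."""
    for candidate in fake_lm(prompt):
        if where(candidate):
            return candidate
    raise ValueError("no completion satisfied the constraints")

# The constraint is declared with the query, like an LMQL `where` clause.
answer = query(
    "Q: What is the capital of France?\nA:",
    where=lambda s: len(s) < 10 and s.isalpha(),
)
print(answer)  # "Paris"
```

In real LMQL the runtime enforces such constraints during decoding rather than filtering finished completions, which is much more efficient.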
marvin's @ai_model decorator implements something similar to what I had in mind for using a language model to extract structured data from unstructured input.
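One way such a decorator might work, sketched under assumptions: build a prompt from the class's fields, ask the model for JSON, and parse the response into an instance. This is not marvin's actual implementation; `stub_llm` and `from_text` are hypothetical names for illustration.

```python
# A hedged sketch of an @ai_model-style decorator. `stub_llm` is a
# placeholder for a real language-model call.
import json
from dataclasses import dataclass, fields

def stub_llm(prompt: str) -> str:
    # Placeholder: a real model would extract fields from the text.
    return '{"city": "New York", "country": "USA"}'

def ai_model(cls):
    cls = dataclass(cls)
    field_names = [f.name for f in fields(cls)]

    def from_text(text: str):
        prompt = (f"Extract {field_names} from the text below. "
                  f"Reply with JSON only.\nText: {text}")
        return cls(**json.loads(stub_llm(prompt)))

    cls.from_text = staticmethod(from_text)
    return cls

@ai_model
class Location:
    city: str
    country: str

loc = Location.from_text("I took the train to the big apple")
print(loc)  # Location(city='New York', country='USA')
```

The appeal is that the class definition doubles as the extraction schema: adding a field changes both the prompt and the parsed result.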
Restricting the next predicted token to adhere to a specific context-free grammar seems like a big step forward in weaving language models into applications.
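A toy illustration of the mechanism: before picking a token, mask out any candidate that would make the output invalid under the grammar. Here the "grammar" is just "a decimal integer" and the per-step scores are hard-coded; real implementations apply the mask to the model's logits over the full vocabulary.

```python
# Grammar-constrained decoding, reduced to its core: filter tokens that
# would break the grammar, then pick greedily among the legal ones.

def is_valid_prefix(text: str) -> bool:
    # Valid prefixes of the "decimal integer" grammar: empty or all digits.
    return text == "" or text.isdigit()

def constrained_decode(scores_per_step: list[dict]) -> str:
    out = ""
    for scores in scores_per_step:
        # Keep only tokens that leave `out` a valid prefix.
        allowed = {tok: s for tok, s in scores.items()
                   if tok == "</s>" or is_valid_prefix(out + tok)}
        tok = max(allowed, key=allowed.get)  # greedy pick among legal tokens
        if tok == "</s>":
            break
        out += tok
    return out

# The model "wants" to emit "abc" first, but the mask forbids it.
steps = [
    {"abc": 0.9, "4": 0.08, "</s>": 0.05},
    {"2": 0.6, "abc": 0.3, "</s>": 0.1},
    {"</s>": 0.8, "x": 0.2},
]
print(constrained_decode(steps))  # "42"
```

The output is guaranteed to parse, which is exactly what an application consuming the model's output needs.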
Using system prompts provides an intuitive way to separate the input and output schema from the input content itself.
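Concretely, the separation looks like this: the system message fixes the task and output schema once, and each user message carries only content. The message shape follows the common chat-completion convention; the schema here is an invented example.

```python
# System prompt = schema and instructions; user message = content only.
import json

SCHEMA = {"title": "string", "sentiment": "positive | negative | neutral"}

def build_messages(content: str) -> list[dict]:
    return [
        {"role": "system",
         "content": "Extract fields from the user's text and reply with "
                    "JSON matching this schema: " + json.dumps(SCHEMA)},
        {"role": "user", "content": content},
    ]

msgs = build_messages("Loved the new espresso machine, works great.")
print(msgs[0]["role"])  # "system"
```

Because the schema lives in the system message, the same conversation setup can be reused across many inputs without re-stating the instructions.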
With the support of GPT-4, I feel unstoppable. The overnight surge in productivity is intoxicating, not for making money or starting a business, but for the sheer joy of continuously creating ideas...
I wrote a few paragraphs disagreeing with Paul's take, asserting that, like Simon suggests, we should think of language models like ChatGPT as a “calculator for words”.