Synthesizing Verifiable Code for Large Specifications Using Few-Shot Learning

Benedict Böttger

In this work, we present a novel approach that leverages the few-shot learning capabilities of Large Language Models (LLMs) to address the long-standing challenge of synthesizing digital circuits from linear-time temporal logic (LTL) specifications. We concentrate on parameterized specifications, which are common in hardware design, as they can describe a system of arbitrary size depending on one or more parameter values. Owing to the computational complexity of LTL synthesis (2EXPTIME-complete), classical synthesis algorithms cannot scale beyond relatively small parameter values. Using implementations that fulfill the specification for lower parameter values, we task different LLMs with generating a satisfying implementation for a larger parameter value. The smaller implementations can either be generated by classical synthesis tools, which perform well on small specifications, or, in a hybrid approach, be taken from human-written solutions. For the hardware representation, we directly target the Verilog hardware description language instead of a lower-level representation such as AIGER, in order to leverage the expressiveness of the language in combination with LLMs. We demonstrate that this approach can, in some cases, successfully synthesize correct Verilog code from the given examples together with the LTL formula. This works especially well when the examples are human-written solutions. In the successful instances, the approach scales several orders of magnitude beyond what classical LTL synthesis tools such as Strix or BoSy can achieve on their own. Intriguingly, in a limited number of cases, the LLM accomplishes this task zero-shot, without requiring any example implementations. While the method is not consistent enough to be used on its own, we believe that a hybrid approach, integrating it with existing tools or a programmer's workflow, may prove useful.
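To illustrate the kind of parameterized specification referred to above, consider a simple n-client arbiter, a standard LTL synthesis benchmark (shown here only as an illustrative example, not necessarily one of the benchmarks used in the thesis). The parameter n fixes the number of request/grant signal pairs r_i, g_i; the specification demands mutual exclusion of grants and that every request is eventually granted:

$$\varphi_n \;=\; \bigwedge_{1 \le i < j \le n} \mathbf{G}\,\neg(g_i \wedge g_j) \;\;\wedge\;\; \bigwedge_{i=1}^{n} \mathbf{G}\,(r_i \rightarrow \mathbf{F}\, g_i)$$

Each concrete value of n instantiates a larger LTL formula, and a classical tool must re-synthesize the circuit for every instance, which is where the few-shot approach with implementations for smaller n comes in.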

Bachelor Thesis.

(pdf)