Fading Coder

One Final Commit for the Last Sprint


Spring AI: Turning LLM Text into Strongly-Typed Java Objects and Invoking Local Functions

Tech · May 16

Spring AI closes the gap between an LLM’s free-form text and the strongly-typed world of Java. This article shows how the framework converts chat answers into POJOs and how it lets the model call local Java functions at runtime.

Mapping a Chat Answer to a Java Bean

Instead of receiving plain text you can ask Spring AI to give you a ready-made object.

@RestController
class MovieController {

    private final ChatClient client;

    // Spring injects a pre-configured builder; without a constructor
    // the final field would never be initialised.
    MovieController(ChatClient.Builder builder) {
        this.client = builder.build();
    }

    @GetMapping("/filmography")
    ActorFilms randomFilmography() {
        return client.prompt()
                     .user("Pick a famous actor and list 5 of their movies.")
                     .call()
                     .entity(ActorFilms.class);
    }
}

The entity method is a thin wrapper around two cooperating classes:

  1. BeanOutputConverter – builds a JSON schema from the target class and a prompt that forces the model to return only valid JSON.
  2. A Jackson-based converter that deserialises the JSON into the requested type.
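The article never shows the ActorFilms type itself. A minimal sketch of what it might look like (the field names here are assumptions — Spring AI derives the JSON schema from whatever properties the target class actually declares):

```java
import java.util.List;

// Hypothetical target type for entity(ActorFilms.class).
// BeanOutputConverter reflects over the record components to build the
// schema the model must satisfy, e.g. {"actor": "...", "movies": ["..."]}.
public record ActorFilms(String actor, List<String> movies) { }
```

Because it is a plain record, Jackson can deserialise the model’s JSON into it without any extra configuration.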

Prompt Engineering Behind the Scenes

BeanOutputConverter produces a system message similar to:

Return raw JSON that matches the following schema and nothing else.
Do not wrap the response in markdown code blocks.
{...insert generated schema here...}

Spring AI derives the schema reflectively from ActorFilms.class. If the model still returns malformed JSON, a ConversionException is thrown and can be handled by the caller.

Letting the Model Call Your Code

Modern LLMs support "function calling": you describe a function, the model may decide to invoke it, and you feed the result back into the conversation. Spring AI automates the plumbing.

Declaring a Function

@Component("weatherFn")
public class WeatherService implements Function<WeatherRequest, WeatherResponse> {

    public enum Unit { C, F }

    public record WeatherRequest(String location, Unit unit) { }
    public record WeatherResponse(int temp, String condition) { }

    @Override
    public WeatherResponse apply(WeatherRequest req) {
        // call a real weather API here
        return new WeatherResponse(30, "Sunny");
    }
}

Using the Function in a Prompt

@PostMapping("/ask")
ChatData ask(@RequestParam String question) {
    String answer = client.prompt()
                          .user(question)
                          .functions("weatherFn")   // register the bean above
                          .call()
                          .content();
    return new ChatData("text", answer);
}
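The ChatData wrapper returned by the endpoint is not defined in the article; a plausible minimal sketch (both field names are assumptions):

```java
// Hypothetical response wrapper for the /ask endpoint: a type tag
// plus the model's final text answer.
public record ChatData(String type, String content) { }
```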

If the user asks "What’s the weather in Paris?", the framework:

  1. Attaches a tools section to the request describing weatherFn.
  2. When the model replies with finish_reason=tool_calls, Spring AI:
    1. Parses the arguments JSON into WeatherRequest.
    2. Invokes the Java method.
    3. Appends the WeatherResponse to the conversation.
    4. Re-sends the enlarged prompt to the model.
  3. The loop stops when the model produces a final text answer.

Recursive Tool Execution

The loop is handled in ChatClient#call:

if (response.requiresToolCall()) {
    List<Message> extended = handleToolCalls(prompt, response);
    return call(new Prompt(extended, prompt.getOptions()));
}

Multiple tools can be registered and the model may call several of them in one turn; the framework keeps invoking them until the model returns a plain text answer instead of another tool call.
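The per-turn dispatch step can be sketched with plain Java collections. This is an illustration of the idea, not Spring AI’s actual internals — the class and method names here are invented:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Illustrative sketch: the framework keeps a registry of named functions
// and applies whichever ones the model's tool_calls mention, collecting
// each result so it can be appended to the conversation as a tool message.
public class ToolDispatcher {

    private final Map<String, Function<String, String>> registry;

    public ToolDispatcher(Map<String, Function<String, String>> registry) {
        this.registry = registry;
    }

    /** Runs every requested tool (name -> raw JSON args) and returns the results. */
    public List<String> dispatch(Map<String, String> toolCalls) {
        return toolCalls.entrySet().stream()
                .map(e -> registry.get(e.getKey()).apply(e.getValue()))
                .toList();
    }
}
```

In the real framework the argument JSON is first converted into the function’s declared request type (here it stays a raw string for brevity).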

Take-away

Spring AI treats an LLM as a remote service that sometimes needs structured input and sometimes needs to execute local logic. With entity() you get type-safe data back; with functions() you give the model the ability to reach into your JVM. Both patterns let you weave AI into ordinary Spring applications without sacrificing Java’s compile-time guarantees or runtime robustness.

