Spring AI: Turning LLM Text into Strongly-Typed Java Objects and Invoking Local Functions
Spring AI closes the gap between an LLM’s free-form text and the strongly-typed world of Java. This article shows how the framework converts chat answers into POJOs and how it lets the model call local Java functions at runtime.
Mapping a Chat Answer to a Java Bean
Instead of receiving plain text you can ask Spring AI to give you a ready-made object.
@RestController
class MovieController {

    private final ChatClient client;

    MovieController(ChatClient.Builder builder) {
        this.client = builder.build();
    }

    @GetMapping("/filmography")
    ActorFilms randomFilmography() {
        return client.prompt()
            .user("Pick a famous actor and list 5 of their movies.")
            .call()
            .entity(ActorFilms.class);
    }
}
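The controller returns an ActorFilms instance, which the article never defines. A plausible shape is a plain Java record (the field names here are assumptions, not taken from the article); Spring AI derives the JSON schema from its components:

```java
import java.util.List;

// Hypothetical shape for ActorFilms; the exact fields are an assumption.
// Record components become properties in the generated JSON schema.
public record ActorFilms(String actor, List<String> movies) { }
```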
The entity method is a thin wrapper around two cooperating pieces:
- BeanOutputConverter – builds a JSON schema from the target class and a prompt that forces the model to return only valid JSON.
- A Jackson-based converter that deserialises the JSON into the requested type.
Prompt Engineering Behind the Scenes
BeanOutputConverter produces a system message similar to:
Return raw JSON that matches the following schema and nothing else.
Do not wrap the response in markdown code blocks.
{...insert generated schema here...}
Spring AI derives the schema reflectively from ActorFilms.class. If the model still returns malformed JSON, a ConversionException is thrown and can be handled by the caller.
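The reflective derivation can be sketched in plain Java; this is not Spring AI's actual code, just a minimal illustration of how record components map to schema properties (ActorFilms is redeclared locally so the sketch compiles standalone):

```java
import java.lang.reflect.RecordComponent;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch only: derive a property-name -> type map from a record's
// components, the same reflective information a schema generator uses.
public class SchemaSketch {

    public record ActorFilms(String actor, List<String> movies) { }

    static Map<String, String> properties(Class<?> recordClass) {
        Map<String, String> props = new LinkedHashMap<>();
        for (RecordComponent rc : recordClass.getRecordComponents()) {
            props.put(rc.getName(), rc.getGenericType().getTypeName());
        }
        return props;
    }
}
```

For ActorFilms this yields one entry per component, e.g. actor mapped to java.lang.String; a real generator then translates those types into JSON schema keywords.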
Letting the Model Call Your Code
Modern LLMs support "function calling": you describe a function, the model may decide to invoke it, and you feed the result back into the conversation. Spring AI automates the plumbing.
Declaring a Function
@Component("weatherFn")
public class WeatherService implements Function<WeatherRequest, WeatherResponse> {

    public enum Unit { C, F }
    public record WeatherRequest(String location, Unit unit) { }
    public record WeatherResponse(int temp, String condition) { }

    @Override
    public WeatherResponse apply(WeatherRequest req) {
        // call a real weather API here
        return new WeatherResponse(30, "Sunny");
    }
}
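Because the service is just a java.util.function.Function, it can be invoked directly, which is exactly what Spring AI does once it has parsed the model's arguments. A standalone sketch (the @Component wiring is omitted so this compiles without Spring; Unit and the stubbed values mirror the example above):

```java
import java.util.function.Function;

// Standalone version of the weather function: same logic as the
// WeatherService example, minus the Spring annotation.
public class WeatherSketch {

    enum Unit { C, F }
    record WeatherRequest(String location, Unit unit) { }
    record WeatherResponse(int temp, String condition) { }

    static final Function<WeatherRequest, WeatherResponse> weatherFn =
        req -> new WeatherResponse(30, "Sunny");  // stubbed result
}
```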
Using the Function in a Prompt
@PostMapping("/ask")
ChatData ask(@RequestParam String question) {
    String answer = client.prompt()
        .user(question)
        .functions("weatherFn") // register the bean above
        .call()
        .content();
    return new ChatData("text", answer);
}
If the user asks "What’s the weather in Paris?", the framework:
- Attaches a tools section to the request describing weatherFn.
- When the model replies with finish_reason=tool_calls, Spring AI:
  - Parses the arguments JSON into WeatherRequest.
  - Invokes the Java method.
  - Appends the WeatherResponse to the conversation.
  - Re-sends the enlarged prompt to the model.
- The loop stops when the model produces a final text answer.
Recursive Tool Execution
The loop is handled in ChatClient#call:
if (response.requiresToolCall()) {
    List<Message> extended = handleToolCalls(prompt, response);
    return call(new Prompt(extended, prompt.getOptions()));
}
Multiple tools can be registered and the model may call several of them in one turn; the framework keeps invoking until the model is satisfied.
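The keep-invoking-until-satisfied behaviour can be sketched with stand-in types; none of these are Spring AI's real classes, and the single-string conversation is a deliberate simplification:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class ToolLoopSketch {

    // Stand-in for a model reply: either a tool request or a final answer.
    record Response(boolean requiresToolCall, String content) { }

    // Keep calling the model, appending each tool result to the
    // conversation, until the model returns a final text answer.
    static String run(List<String> messages, Function<List<String>, Response> model) {
        List<String> conversation = new ArrayList<>(messages);
        while (true) {
            Response r = model.apply(conversation);
            if (!r.requiresToolCall()) {
                return r.content();
            }
            conversation.add("tool-result: " + r.content());
        }
    }
}
```

A stub model that requests a tool once and then answers exercises both branches of the loop.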
Take-away
Spring AI treats an LLM as a remote service that sometimes needs structured input and sometimes needs to execute local logic. With entity() you get type-safe data back; with functions() you give the model the ability to reach into your JVM. Both patterns let you weave AI into ordinary Spring applications without sacrificing Java’s compile-time guarantees or runtime robustness.