Supercharge Your Java Application with Generative AI
The introduction of Generative AI has been revolutionary, pushing the envelope of what is possible. For example, one can find a solution to a problem by prompting a large language model, translate between languages without a translator, or create professional-grade drawings without knowing how to draw or enlisting the help of a professional artist.
In this article we will demonstrate how to set up your Java application so you can call OpenAI's API and expose these calls via GraphiQL (GraphQL's built-in UI). We will use this library, which contains the client that calls OpenAI's API along with the related GraphQL schema and data objects.
by Yi leng Yao
Requirements
To follow along, you will need an OpenAI API key, a Java development environment, and Maven (the examples below use the Maven wrapper, ./mvnw).
Setting Up Your OpenAI API Key
export OPENAI_API_KEY="your api-key"
Your Spring Boot application will retrieve the OpenAI API key from this environment variable and use it to call OpenAI's API.
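If you want to confirm the variable is visible to your JVM before wiring anything else up, here is a minimal, hypothetical sketch (the ApiKeyCheck class is made up and not part of the setup; the application picks the key up from the environment as described above):

// Hypothetical standalone check: verifies that the OPENAI_API_KEY
// environment variable is visible to the JVM.
public class ApiKeyCheck {
  public static void main(String[] args) {
    String apiKey = System.getenv("OPENAI_API_KEY");
    if (apiKey == null || apiKey.isBlank()) {
      throw new IllegalStateException("OPENAI_API_KEY environment variable is not set");
    }
    System.out.println("OPENAI_API_KEY is set (" + apiKey.length() + " characters)");
  }
}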
(Optional) Creating a Spring Boot Project
If you already have your own Java application, you can skip this step. Otherwise, generate a new Spring Boot project (for example with Spring Initializr at https://start.spring.io, using Maven and Java), unzip it, and verify that it starts:
./mvnw spring-boot:run
Setting Up Your Spring Boot Project
Adding Dependencies
You will need to add the following dependencies to your pom.xml
<!-- openai client dependency -->
<dependency>
  <groupId>io.github.yilengyao</groupId>
  <artifactId>openai</artifactId>
  <version>1.0.0</version>
</dependency>

<!-- graphql dependency -->
<dependency>
  <groupId>com.netflix.graphql.dgs</groupId>
  <artifactId>graphql-dgs-spring-boot-starter</artifactId>
  <version>7.3.6</version>
</dependency>
<dependency>
  <groupId>com.graphql-java</groupId>
  <artifactId>graphql-java-extended-scalars</artifactId>
  <version>20.0</version>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
Compatibility Note: the DGS starter version must match your Spring Boot version, so consult the Netflix DGS release notes and pick the release that corresponds to the Spring Boot version your project uses.
Then run the Maven package goal to download the dependencies:
./mvnw clean package
Setting Up GraphQL Datafetcher in Your Spring Boot Application
To integrate GraphQL with your Spring Boot application, you’ll create a Datafetcher component. This component will interact with your OpenAI client and fetch data as required.
First, create an empty marker interface in your application's GraphQL package; the application class shown below references it when configuring component scanning:

package <your package>.graphql;

public interface ApplicationSpecificSpringComponentScanMarker {
}
Then create the data fetcher, also in your GraphQL package:

package <your package>.graphql;

import com.netflix.graphql.dgs.DgsComponent;
import io.github.yilengyao.openai.client.OpenAiClient;
import org.springframework.beans.factory.annotation.Autowired;

// Holds the OpenAiClient bean; the endpoint-specific query and mutation
// methods in the sections below are added to this class.
@DgsComponent
public class OpenAiDataFetcher {

  private final OpenAiClient openAiClient;

  @Autowired
  public OpenAiDataFetcher(OpenAiClient openAiClient) {
    this.openAiClient = openAiClient;
  }
}
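The client is injected through the constructor like any other Spring bean. As a hypothetical aside (the OpenAiFacade class below is made up and not required for this setup), the same bean can be injected into any other component if you ever want to call OpenAI outside of GraphQL:

import io.github.yilengyao.openai.client.OpenAiClient;
import org.springframework.stereotype.Service;

// Hypothetical example: OpenAiClient is an ordinary Spring bean, so it can be
// injected into any @Service or @Component, not only DGS data fetchers.
@Service
public class OpenAiFacade {

  private final OpenAiClient openAiClient;

  public OpenAiFacade(OpenAiClient openAiClient) {
    this.openAiClient = openAiClient;
  }
}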
Configuring Your Application to Scan Datafetcher Class and OpenAI-Java Package
Here’s how you can set up the application to scan for the OpenAiDataFetcher class and the classes within the io.github.yilengyao.openai.configuration package:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication(
    scanBasePackageClasses = {
        io.github.yilengyao.openai.configuration.ApplicationSpecificSpringComponentScanMarker.class,
        <your package>.graphql.ApplicationSpecificSpringComponentScanMarker.class
    }
)
public class <Main-Class-Name> {

  public static void main(String[] args) {
    SpringApplication.run(<Main-Class-Name>.class, args);
  }
}
Spring’s Stereotype Annotations and Package Scanning:
When you configure Spring to scan specific packages, it looks for classes within those packages that carry stereotype annotations (such as @Component, or @DgsComponent in our setup). The presence of these stereotype annotations signals to Spring that the class should be managed as a bean and injected where needed in your application.
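For example, a minimal, hypothetical class (the name is made up) placed in a scanned package is registered automatically:

package <your package>.graphql;

import org.springframework.stereotype.Component;

// Hypothetical example: because this class carries a stereotype annotation and
// lives inside a scanned package, Spring creates and manages it as a bean.
@Component
public class ExampleScannedComponent {
}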
Integrating With OpenAI’s Endpoints
Audio
Create Translation
OpenAI provides an endpoint that can translate audio content in other languages into English. Here's how to integrate this functionality into your Spring Boot application using GraphQL:
@DgsMutation
public TextResponse createTranslation(
@InputArgument("file") MultipartFile file,
@InputArgument("translationInput") TranslationInput translationInput) throws IOException {
return openAiClient.createTranslation(TranslationPayload.fromGraphQl(file, translationInput))
.toGraphQl();
}
MultipartFile is the Spring Framework's representation of an uploaded file. By default the maximum upload size is 1 MB, but OpenAI currently accepts files up to 25 MB, so raise the multipart limits in application.properties:

spring.servlet.multipart.max-file-size=25MB
spring.servlet.multipart.max-request-size=25MB
You can launch your Spring Boot application by executing the following command in your terminal:
./mvnw spring-boot:run
Now we can call the translation endpoint using GraphQL:
mutation CreateTranslation($file: Upload!) {
createTranslation(
file: $file,
translationInput: {
model: "whisper-1",
prompt: "translate to english"
}
) {
text
}
}
Uploading the Audio File: in your GraphQL client (for example Altair), attach the audio file to the $file variable; clients that support the GraphQL multipart request spec send it as a file upload alongside the mutation.
Handling a CSRF Prevention Header Error:
If you encounter a CSRF prevention header error in Altair such as:

expecting a csrf prevention header but none was found, supported headers are [apollo-require-preflight, x-apollo-operation-name, graphql-require-preflight]

add one of the supported headers (for example apollo-require-preflight: true) to the request headers in Altair and re-run the mutation.
Create Transcription
OpenAI provides an endpoint that can transcribe audio content into text based on the input language. Here’s how to integrate this functionality into your Spring Boot application using GraphQL:
@DgsMutation
public TextResponse createTranscription(
@InputArgument("file") MultipartFile file,
@InputArgument("transcriptionInput") TranscriptionInput transcriptionInput) throws IOException {
return openAiClient.createTranscription(TranscriptionPayload.fromGraphQl(file, transcriptionInput))
.toGraphQl();
}
You can launch your Spring Boot application by executing the following command in your terminal:
./mvnw spring-boot:run
Now we can call the transcription endpoint using GraphQL:
mutation CreateTranscription($file: Upload!) {
createTranscription(file: $file,
transcriptionInput: {
model: "whisper-1",
prompt: "Optional prompt text here",
language: "en" # Optional language code
}) {
text
}
}
Uploading the Audio File: attach the audio file to the $file variable in your GraphQL client, exactly as for the translation mutation above.
Models
OpenAI offers an endpoint to list and describe the various models available. Here’s how to integrate this functionality into your Spring Boot application using GraphQL:
@DgsQuery
public ModelsOutput models(
@InputArgument("id") Optional<String> id) {
return id.isPresent()
? openAiClient.models(id.get()).toGraphQl()
: openAiClient.models().toGraphQl();
}
You can launch your Spring Boot application by executing the following command in your terminal:
./mvnw spring-boot:run
Then go to http://localhost:8080/graphiql
Listing Models Using GraphiQL
query AllModels {
models {
... on OpenAiResponse {
data {
id
object
created
ownedBy
permission {
id
object
created
allowCreateEngine
allowSampling
allowLogProbs
allowSearchIndices
allowView
allowFineTuning
organization
group
isBlocking
}
root
parent
}
object
}
}
}
Retrieving a Specific Model
This query retrieves a model instance, providing basic information about the model such as the owner and permissions.
query Model {
models(id: "babbage") {
__typename
... on Model {
id
object
created
ownedBy
permission {
id
object
created
allowCreateEngine
allowSampling
allowLogProbs
allowSearchIndices
allowView
allowFineTuning
organization
group
isBlocking
}
root
parent
}
}
}
Completions
OpenAI’s Completions endpoint provides responses based on given text prompts. Here’s how to integrate this functionality into your Spring Boot application using GraphQL:
@DgsMutation
public CompletionOutput completion(
@InputArgument("completionInput") CompletionInput completionInput) {
return openAiClient
.completion(CompletionPayload.fromGraphQl(completionInput))
.toGraphQl();
}
After integrating the necessary dependencies, start your Spring Boot application with:
./mvnw spring-boot:run
Then go to http://localhost:8080/graphiql
Using GraphQL to Query:
mutation Completion {
completion(completionInput: {
model: "text-davinci-003",
prompt: "Why are cats so cute?",
max_tokens: 73,
temperature: 2,
top_p: 1,
n: 4,
stream: false,
stop: "\n",
}) {
id
object
created
model
choices {
text
index
logprobs {
tokens
token_logprobs
top_logprobs {
key
value
}
text_offset
}
finish_reason
}
usage {
prompt_tokens
completion_tokens
total_tokens
}
}
}
Chat
OpenAI’s Chat endpoint provides dynamic responses based on text prompts. This can simulate a back-and-forth conversation with the model. Here’s how to incorporate this into your Spring Boot application:
@DgsMutation
public ChatCompletionResult chatCompletion(
@InputArgument("chatInput") ChatCompletionInput chatInput) {
if (chatInput.getStream() != null && chatInput.getStream()) {
return openAiClient
.streamChatCompletion(ChatCompletionPayload.fromGraphQl(chatInput))
.next()
.map(ChatCompletionChunk::toGraphQl)
.block();
} else {
return openAiClient
.createChatCompletion(ChatCompletionPayload.fromGraphQl(chatInput))
.toGraphQl();
}
}
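Note that, as written, the streaming branch blocks on the first chunk only. If you want to collect every streamed chunk instead, a minimal sketch (the helper name is made up, and it assumes, as the code above implies, that streamChatCompletion returns a reactive Flux of chunks) could be added alongside the methods above:

// Hypothetical helper (requires java.util.List to be imported): collects every
// streamed chunk instead of returning only the first one.
private List<ChatCompletionResult> collectStreamedChunks(ChatCompletionInput chatInput) {
  return openAiClient
      .streamChatCompletion(ChatCompletionPayload.fromGraphQl(chatInput))
      .map(ChatCompletionChunk::toGraphQl)
      .collectList()
      .block();
}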
You can launch your Spring Boot application by executing the following command in your terminal:
./mvnw spring-boot:run
Then go to http://localhost:8080/graphiql
Querying with GraphQL:
mutation ChatCompletion {
chatCompletion(chatInput: {
model: "gpt-3.5-turbo",
messages: [
{
role: "system",
content: "Who is the greatest woccer player."
},
{
role: "user",
content: "How bad is Harry Maguire?"
}
],
stream: false
}) {
... on ChatCompletion {
id
object
created
model
choices {
index
message {
role
content
function_call {
name
arguments
}
}
}
usage {
prompt_tokens
completion_tokens
total_tokens
}
}
... on ChatCompletionChunk {
id
object
created
model
choices {
index
delta {
role
content
function_call {
name
arguments
}
}
}
}
}
}
Generating and Editing Images
OpenAI offers endpoints that allow you to create images from textual prompts and edit existing images.
Create Image Endpoint
This endpoint allows you to create an image from a prompt.
Here’s how to incorporate this into your Spring Boot application:
@DgsMutation
public ImageResponse createImage(
@InputArgument("createImageInput") CreateImageInput createImageInput) {
return openAiClient
.createImage(CreateImagePayload.fromGraphQl(createImageInput))
.toGraphQl();
}
Running the Application:
After updating dependencies, start your Spring Boot app:
./mvnw spring-boot:run
Navigate to http://localhost:8080/graphiql and run the following mutation to generate an image:
mutation CreateImage {
createImage(createImageInput: {
prompt: "Supercharge you Java Application with Generative AI",
# responseFormat: B64_JSON,
n: 5,
size: X256
}) {
createdAt
data {
url
b64Json
}
}
}
Edit Images
This endpoint allows you to edit an existing image based on a prompt (and, optionally, a mask).
Here's how to incorporate this into your Spring Boot application: add the following to the OpenAiDataFetcher class (the method and type names below mirror the other endpoints, so verify them against the library).
@DgsMutation
public ImageResponse editImage(
    @InputArgument("image") MultipartFile image,
    @InputArgument("mask") MultipartFile mask,
    @InputArgument("editImageInput") EditImageInput editImageInput) throws IOException {
  // The method and payload names here follow the pattern of the other endpoints
  // (compare createImage above); check the library's OpenAiClient and generated
  // GraphQL types for the exact signatures.
  return openAiClient
      .editImage(EditImagePayload.fromGraphQl(image, mask, editImageInput))
      .toGraphQl();
}
Now we can call the edit image endpoint using GraphQL:
mutation EditImage($image: Upload!) {
editImage(
image: $image,
# mask: $mask,
editImageInput: {
prompt: "Turn into winter wonderland",
n: 3,
size: X512,
# responseFormat: URL,
# user: "user"
}
) {
createdAt,
data {
url
b64Json
}
}
}
Uploading the Image File: attach the image (and, if you use one, the mask) to the $image and $mask variables in your GraphQL client, the same way as the audio uploads above.
You can read more about the Java library that we use to integrate with OpenAI.
Happy Developing!