Devoxx Invoice Generator V2

[Header image: MidJourney generated]

Based on community feedback, I decided to redevelop the Devoxx Invoice Generator application using "pure" JDBC with a database sequence generator, instead of JPA with Java logic to increment the invoice number.

[Image: The entry method]

We start by injecting the DataSource, which is backed by a HikariDataSource. This becomes important when we discuss the stats anomaly later on.

[Image: The JDBC invoice number logic]

The getInvoiceNumber method is where the "magic" happens: it fetches the next available invoice number from the database sequence generator. I've included the actual SQL statement as a comment above the code.
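Since the code screenshot didn't survive, here is a minimal sketch of what such a method could look like using plain JDBC. The class name and the per-event sequence naming scheme (`invoice_seq_<eventId>`) are my assumptions, not the actual Devoxx code:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class InvoiceNumberDao {

    // Assumed per-event sequence naming scheme; the real name may differ.
    // SELECT nextval('invoice_seq_99')
    static String nextValSql(long eventId) {
        return "SELECT nextval('invoice_seq_" + eventId + "')";
    }

    /** Fetch the next invoice number straight from the PostgreSQL sequence. */
    public static long getInvoiceNumber(DataSource dataSource, long eventId) throws SQLException {
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(nextValSql(eventId));
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getLong(1);
        }
    }
}
```

Under load, each request then costs one pooled connection checkout plus a single nextval round trip, with no Java-side increment state to synchronize.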

Liquibase config

The related Liquibase config looks as follows:

[Image: The Liquibase config]
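As the screenshot is gone, here is a sketch of what a Liquibase changelog creating such a sequence could look like; all ids and names are illustrative, not the actual Devoxx config:

```xml
<!-- Illustrative Liquibase changelog; ids and names are assumptions -->
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.0.xsd">

    <changeSet id="create-invoice-sequence" author="devoxx">
        <createSequence sequenceName="invoice_seq"
                        startValue="1"
                        incrementBy="1"/>
    </changeSet>
</databaseChangeLog>
```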

When running 'siege' for the first time (cold) against this implementation

siege -c200 -t10s --content-type "application/json" 'https://localhost:8080/api/invoice/99 POST "invoice_generation"'

we get the following results:

[Image: Siege results with cold JDBC server]

However, if I run siege a second time against the same running server, the results are much higher. The cold vs. hot difference is probably related to the underlying DataSource's connection pool "filling up"?
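If the cold/hot gap really is the pool warming up, one way to test that hypothesis would be to pre-fill the pool at startup. A sketch, assuming the standard Spring Boot HikariCP properties (the pool sizes are illustrative):

```properties
# Assumption: pre-warming the pool narrows the cold-start gap.
# Setting minimum-idle equal to maximum-pool-size keeps the pool full.
spring.datasource.hikari.minimum-idle=20
spring.datasource.hikari.maximum-pool-size=20
```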

[Image: Siege results with hot JDBC server]

Because of these results, I re-ran the JPA siege tests with both a cold and a hot server and compared them with the JDBC results.

[Image: Siege comparison of JPA vs JDBC results, cold and hot]

As you can see, the JDBC and DB sequencer results are still A LOT faster!

So, at 600 tx/s, I think we're ready to welcome our Devoxxians to the ALF.IO registration. Now let's hope the Kubernetes setup is stable and fast enough to handle the registration burst we'll get on August 16th at 9am CEST.

Comments and suggestions are always welcome!

Stephan

Addendum: DB Sequence Cache

It was brought to my attention that PostgreSQL has a cache option for the CREATE SEQUENCE query.

From the PostgreSQL documentation: "The optional clause CACHE cache specifies how many sequence numbers are to be preallocated and stored in memory for faster access. The minimum value is 1 (only one value can be generated at a time, i.e., no cache), and this is also the default."
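For reference, creating such a sequence with a cache by hand looks like this in PostgreSQL (the sequence name is illustrative):

```sql
-- Preallocate 2000 values; name is illustrative
CREATE SEQUENCE IF NOT EXISTS invoice_seq_99
    START WITH 1
    INCREMENT BY 1
    CACHE 2000;
```

Note that the cache is per backend session: each pooled connection preallocates its own block of values, so numbers can be handed out with gaps and out of order across connections.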

Strangely enough, creating the sequence with a cache of 2000 had no impact on performance!? #SuggestionsWelcome

[Image: The Java method to create a sequence based on a given event id]
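That method might look roughly like this; a hedged sketch, with the naming scheme and cache size as my assumptions:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class SequenceCreator {

    // Assumed DDL; sequence naming scheme and cache size are illustrative.
    static String createSequenceSql(long eventId, int cacheSize) {
        return "CREATE SEQUENCE IF NOT EXISTS invoice_seq_" + eventId
             + " START WITH 1 INCREMENT BY 1 CACHE " + cacheSize;
    }

    /** Create the per-event invoice sequence if it does not exist yet. */
    public static void createSequence(DataSource dataSource, long eventId) throws SQLException {
        try (Connection con = dataSource.getConnection();
             Statement st = con.createStatement()) {
            st.execute(createSequenceSql(eventId, 2000));
        }
    }
}
```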


It has also been brought to my attention that I could use Spring JdbcTemplate (a level of abstraction) instead of "pure" JDBC. It would reduce the code complexity (approx. 50% less code), at perhaps a marginal performance cost.
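For illustration, here is a hedged sketch of what the JdbcTemplate variant could look like, assuming spring-jdbc on the classpath; the class and sequence names are mine:

```java
// Sketch only: assumes spring-jdbc is available; names are illustrative.
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class InvoiceNumberService {

    // PostgreSQL resolves the concatenated text to the per-event sequence name.
    static final String NEXT_VAL_SQL = "SELECT nextval('invoice_seq_' || ?)";

    private final JdbcTemplate jdbcTemplate;

    public InvoiceNumberService(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    /** One line replaces the manual Connection/Statement/ResultSet handling. */
    public long getInvoiceNumber(long eventId) {
        return jdbcTemplate.queryForObject(NEXT_VAL_SQL, Long.class, eventId);
    }
}
```

JdbcTemplate handles connection acquisition, statement cleanup, and exception translation, which is where the roughly 50% code reduction comes from.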

Comment from Danilo Pereira De Luca (Software Craftsman | Distributed Platforms Architect | Mentorship | Java):

"That's awesome! I've been using JDBC for a while exactly because of some performance issues too! I don't know if I lost something, but why do you need to use a sequence? Wouldn't an auto increment column work in this case?"

