The Devoxx Belgium CFP results & AI
The image above was generated using CrAIyon, formerly known as DALL-E mini.


Last Friday we closed the CFP for Devoxx Belgium 2022.

We received 700 proposals from 516 potential speakers! Luckily we limited each speaker to a maximum of 3 submissions, otherwise I'm convinced we would once again have exceeded the 1K mark.

Before the review process starts, I wanted to get an overview of the submitted talks and put everything in perspective. However, after creating some tag clouds, I found myself hacking away with GPT-3 to generate summaries, keywords, and more.

Let's have a closer look...

BTW, you can also help review the submitted CFP talks by signing up @ https://bit.ly/review-talks

Proposals by Tracks

The obvious stat you want to see as the program chair of Devoxx Belgium is the number of proposals per track:

[Chart: proposals per track]

The "Architecture", "Development Practices" and "Build & Deploy" tracks received the most proposals, which makes sense because Devoxx Belgium welcomes mainly senior developers. I can imagine that's what more experienced engineers are tackling on a daily basis.

Of course I'm glad to see the "Java" and "Server Side Java" tracks in 4th and 5th position. The DNA of Devoxx is still heavily linked to the Java ecosystem; unfortunately, "UI & UX" less so.

Proposals by Session Type

Obviously the conference format received the most proposals: 500+ for about 90 slots (keep in mind that some sponsors also receive a speaking slot as part of their sponsorship package, which is why you see 111 available slots).

[Chart: proposals per session type]

I might consider dropping some Deep Dive schedule slots this year and replacing them with conference talks on day two of the event. This way we can welcome more speakers and provide more content for our Devoxxians. #TBD

Proposal Tags

Excited to see that the "Java" tag is still the most used (97 times) across all submitted proposals, followed by Kubernetes.

What surprised me this year is seeing both "Security" (3) and "Security best practices" (15) in the top-20 list; it must be all those SecDevOps advocates pushing their passion.

Nice to see rising stars like Quarkus and GraalVM, and Cloud Native applications in general. And of course Spring and Spring Boot, together with Jakarta EE, also made the top-20. Great!

[Tag cloud: top-20 proposal tags]

Proposals by Company

[Chart: proposals per company]

Once again Red Hat, IBM and Oracle are in the top 3, followed by Google and Microsoft.

No real surprises there, but what did disappoint me is that the Oracle submissions are all from developer advocates; there are no submissions from the actual core Java engineers. Having JavaOne the week after Devoxx Belgium probably didn't help, and of course the anxiety (or company regulations) around travelling "during" a pandemic didn't help either.


-----

Now for the fun part... GPT-3 integration

We limit the proposal abstract to 1500 characters, but wouldn't it be nice to also have a summary when a speaker has used every available character of their abstract?
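A rough sketch of how such a length check could look (the threshold and function name are my own invention, not the actual CFP.DEV code):

```python
MAX_ABSTRACT_CHARS = 1500  # the CFP abstract limit mentioned above

def needs_summary(abstract: str, threshold: int = 1200) -> bool:
    """Only call the (paid) summarisation API for abstracts close to the limit."""
    return len(abstract) >= threshold
```

Short abstracts are already their own summary, so skipping the API call for them keeps the cost down.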

OpenAI offers a GPT-3 (Generative Pre-trained Transformer) service that lets you summarise text using different language models, which use deep learning to produce human-like text.

The most capable (and most expensive) model in the GPT-3 series is currently text-davinci-002. It can perform any task the other GPT-3 models can (complex intent, cause and effect, creative generation, search, summarisation for an audience), often with less context.

OpenAI provides a very simple REST interface where you provide the model you want to use, the text you want to process, and your personal API token:

curl https://api.openai.com/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "text-davinci-002",
    "prompt": "Envoy is an open-source edge and service proxy that was designed for cloud-native applications. This hands-on introduction of Envoy proxy fundamentals is for anyone starting with their Envoy journey. \n\nLooking under the hood of the Envoy proxy and glancing at the configuration that powers it can overwhelm anyone. There’s a lot of stuff there! The Envoy documentation is comprehensive, but it can be challenging to navigate through it.\n\nIn this workshop, Peter will introduce the Envoy fundamentals and basic building blocks that make Envoy tick, answering questions such as What are listeners? What are filters? How do they work, and how should we configure them?\n\nAfter the theoretical introduction, we’ll put the concepts into practice and demonstrate how to configure traffic routing, outlier detection, and TLS, and how to get started with extending Envoy using Wasm.\n\nTl;dr",
    "temperature": 0.7,
    "max_tokens": 60,
    "top_p": 1,
    "frequency_penalty": 0,
    "presence_penalty": 0
  }'

The "temperature" parameter controls how random the completions are: at 0 the output is (mostly) deterministic, while higher values make the GPT-3 service return more varied text responses.
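Under the hood, temperature rescales the model's token probabilities before sampling: the raw scores (logits) are divided by the temperature before the softmax, so low values concentrate almost all probability on the top token while high values flatten the distribution. A minimal illustration (not OpenAI's actual code):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores to probabilities; lower temperature sharpens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, 0.2)  # top token dominates: near-deterministic
warm = softmax_with_temperature(logits, 1.0)  # probability mass spread out: more varied
```

With temperature 0.2 the first token gets almost all the probability mass; at 1.0 the other tokens have a real chance of being sampled, which is why repeated calls can return different summaries.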

For example, let's take the proposal from Eitan Suez and Peter Jausovec on Envoy (shown below) and see what the different GPT-3 models respond.

[Image: the Envoy proposal abstract]

text-davinci-002 model:

[Image: generated summary]

text-curie-001 model:

[Image: generated summary]

text-babbage-001 model:

[Image: generated summary]

text-ada-001 model:

[Image: generated summary]

Pretty good, right?

Some AI bias does exist. For example, when I generated a summary for the proposal below, the summary ended with "...logging in Python." instead of Java, probably because the actual abstract didn't mention any programming language.

[Image: the proposal abstract in question]

For a brief summary I was most pleased with text-curie-001, also because it's one of the fastest models and comes at a low cost. So I integrated it into CFP.DEV, which can now generate proposal summaries for long abstracts.
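The integration boils down to the same completions call shown earlier, just with the curie model. A hedged sketch of building the request body (the helper name is mine and this is not the actual CFP.DEV code; the "Tl;dr" suffix mirrors the curl example above):

```python
import json

def build_summary_request(abstract: str, max_tokens: int = 60) -> str:
    """Build the JSON body for a text-curie-001 summarisation call."""
    payload = {
        "model": "text-curie-001",
        # Appending "Tl;dr" nudges the model to produce a short summary,
        # the same trick used in the curl example earlier.
        "prompt": abstract + "\n\nTl;dr",
        "temperature": 0.7,
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

body = build_summary_request("Envoy is an open-source edge and service proxy...")
# POST this body to https://api.openai.com/v1/completions with the
# "Authorization: Bearer $OPENAI_API_KEY" header, as in the curl example.
```

The summary then comes back in the first entry of the response's "choices" array.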

[Image: a generated summary in CFP.DEV]

One last thing...

OpenAI also has a beta service named "Codex", which you have probably already used indirectly via GitHub Copilot.

The Codex models are descendants of the GPT-3 models that can understand and generate code. Their training data contains both natural language and billions of lines of public code from GitHub. The models are most capable in Python and proficient in over a dozen languages, including Java, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.

Codex lets you select a programming language, provide a plain-text description of the application you want to generate, and press the submit button. Let me show you this in action...

So instead of searching Stack Overflow, you select your programming language, describe the algorithm, function or application you want to generate, and press Submit.
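Behind the button this maps to the same completions endpoint, just with a Codex model (code-davinci-002 was the Codex beta's flagship). A rough, illustrative sketch of assembling such a request; the helper and the comment-header prompt style are my own, not the actual playground code:

```python
import json

def build_codex_request(language: str, description: str) -> str:
    """Turn a language choice plus a plain-text task into a Codex request body."""
    # A comment-style header is a common way to steer Codex toward a language.
    prompt = f"# Language: {language}\n# Task: {description}\n"
    return json.dumps({
        "model": "code-davinci-002",
        "prompt": prompt,
        "temperature": 0,      # deterministic output suits code generation
        "max_tokens": 256,
    })
```

As before, the body is POSTed to https://api.openai.com/v1/completions with your API key, and the generated code comes back in the "choices" array.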

We are living in the future!

BTW, if someone from OpenAI is reading this post, please contact me if you're interested in speaking at this year's Devoxx Belgium event.
