The Devoxx Belgium CFP results & AI
Last Friday we closed the CFP for Devoxx Belgium 2022.
We received 700 proposals from 516 potential speakers! Luckily we limited the number of proposals per speaker to a maximum of 3 submissions, otherwise I'm convinced we would have again exceeded the 1K mark.
Before the review process starts I wanted to give an overview of the submitted talks and place everything in perspective. However, after creating some tag clouds I ended up hacking away with GPT-3 to generate summaries, keywords, etc.
Let's have a closer look...
BTW you can also help review the submitted CFP talks by signing up @ https://bit.ly/review-talks
Proposals by Tracks
The obvious stat you want to see as the program chair of Devoxx Belgium is the number of proposals per track:
The "Architecture", "Development Practices" and "Build & Deploy" tracks received the most proposals, which kind of makes sense because Devoxx Belgium welcomes mainly senior developers. I can imagine these are the topics more experienced engineers tackle on a daily basis.
Of course I'm glad to see that the "Java" & "Server Side Java" tracks are in 4th and 5th position. The DNA of Devoxx is still heavily linked to the Java ecosystem; unfortunately "UI & UX" less so.
Proposals by Sessions
Obviously the Conference session format received the most proposals: 500+ proposals for about 90 slots (keep in mind that some sponsors also receive a speaking slot as part of their sponsor package, which is why you see 111 available slots).
I might consider dropping some Deep Dive schedule slots this year and replacing them with conference talks on day two of the event. This way we can welcome more speakers and provide more content for our Devoxxians. #TBD
Proposal Tags
Excited to see that the "Java" tag is still the most used (97 times) in all the submitted proposals, followed by Kubernetes.
What surprised me this year is seeing both "Security" (3) and "Security best practices" (15) in the top-20 list; must be all these SecDevOps advocates pushing their passion.
Nice to see rising stars like Quarkus & GraalVM and Cloud Native applications in general. And of course Spring & Spring Boot with Jakarta EE also making the top-20, great!
Proposals by Company
Again Red Hat, IBM and Oracle in the top 3, followed by Google and Microsoft.
No real surprises there, but what did disappoint me is that the Oracle submissions all come from developer advocates; no submissions from the actual core Java engineers. Probably having JavaOne the week after Devoxx Belgium didn't help, and of course the anxiety (or company regulations) around travelling "during" a pandemic didn't help either.
-----
Now for the fun part... GPT-3 integration
We limit the proposal abstract to 1500 characters, but wouldn't it be nice to also have a summary when the speaker used every available character in his/her abstract?
OpenAI has a GPT-3 (Generative Pre-trained Transformer) service which allows you to summarise text using different language models that use deep learning to produce human-like text.
The most capable (and most expensive) model in the GPT-3 series is currently text-davinci-002. It can perform any task the other GPT-3 models can (complex intent, cause and effect, creative generation, search, summarisation for an audience) and often with less context.
OpenAI provides a very simple REST interface where you provide the model you want to use, the text you want to process, your personal API token, etc.
curl https://api.openai.com/v1/completions \
? -H "Content-Type: application/json" \
? -H "Authorization: Bearer $OPENAI_API_KEY" \
? -d '{
? "model": "text-davinci-002",
? "prompt": "Envoy is an open-source edge and service proxy that was designed for cloud-native applications. This hands-on introduction of Envoy proxy fundamentals is for anyone starting with their Envoy journey. \n\nLooking under the hood of the Envoy proxy and glancing at the configuration that powers it can overwhelm anyone. There’s a lot of stuff there! The Envoy documentation is comprehensive, but it can be challenging to navigate through it.\n\nIn this workshop, Peter will introduce the Envoy fundamentals and basic building blocks that make Envoy tick, answering questions such as What are listeners? What are filters? How do they work, and how should we configure them?\n\nAfter the theoretical introduction, we’ll put the concepts into practice and demonstrate how to configure traffic routing, outlier detection, and TLS, and how to get started with extending Envoy using Wasm.\n\nTl;dr",
? "temperature": 0.7,
? "max_tokens": 60,
? "top_p": 1,
? "frequency_penalty": 0,
? "presence_penalty": 0
}'
The "temperature" parameter allows the GPT-3 service to return different text responses if it's not set to 1.
For example let's take the proposal from Eitan Suez and Peter Jausovec?on Envoy (shown below) and lets see what the different GPT-3 models respond.
text-davinci-002 model:
text-curie-001 model:
text-babbage-001 model:
text-ada-001 model:
Pretty good, right?
Some AI bias does exist. For example, when I generated a summary for the proposal below, the summary ended with "...logging in Python." instead of Java, probably because the actual abstract didn't mention any programming language.
For a brief summary I was most pleased with text-curie-001, also because it's one of the fastest and cheapest models. So I integrated that one into CFP.DEV and can now generate proposal summaries for long abstracts.
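For those curious what such an integration can look like, below is a minimal Java sketch (not the actual CFP.DEV code) that posts an abstract to the completions endpoint with the text-curie-001 model, reusing the "Tl;dr" suffix from the curl example above. The class and method names and the naive JSON handling are just illustrative assumptions.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProposalSummarizer {

    private static final String COMPLETIONS_URL = "https://api.openai.com/v1/completions";

    // Hypothetical helper: ask text-curie-001 for a short summary of a (long) proposal abstract.
    static String summarize(String abstractText) throws Exception {
        // Naive JSON escaping of backslashes, quotes and newlines;
        // a real integration would use a JSON library instead.
        String escaped = abstractText.replace("\\", "\\\\")
                                     .replace("\"", "\\\"")
                                     .replace("\n", "\\n");

        String body = "{"
                + "\"model\": \"text-curie-001\","
                + "\"prompt\": \"" + escaped + "\\n\\nTl;dr\","  // same Tl;dr trick as the curl example
                + "\"temperature\": 0.7,"
                + "\"max_tokens\": 60"
                + "}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(COMPLETIONS_URL))
                .header("Content-Type", "application/json")
                .header("Authorization", "Bearer " + System.getenv("OPENAI_API_KEY"))
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The generated summary is in choices[0].text of the JSON response;
        // for brevity we return the raw JSON and leave the parsing to the caller.
        return response.body();
    }
}

In practice you would probably only trigger a call like this for abstracts above a certain length and cache the result, since every request costs tokens.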
One last thing...
OpenAI also has a beta service named "Codex" which you have probably already used indirectly via GitHub Copilot.
The Codex models are descendants of the GPT-3 models that can understand and generate code. Their training data contains both natural language and billions of lines of public code from GitHub. The models are most capable in Python and proficient in over a dozen languages including Java, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
Codex allows you to select a programming language, provide a plain-text description of the application you want to generate, and press the submit button. Let me show you this in action...
So instead of searching Stack Overflow, you select your programming language, provide the text of the algorithm, function or application you want to generate, and press Submit.
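Just to illustrate (the screenshots above come straight from the playground), a comparable call can also be made programmatically. The sketch below assumes the Codex beta exposes a model such as code-davinci-002 through the same completions endpoint; the model name and the prompt style are my own assumptions.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CodexDemo {

    public static void main(String[] args) throws Exception {
        // Plain-text description of the code we want, with the target language as a leading comment
        // (prompt style is an assumption; the playground handles this for you).
        String prompt = "// Java\\n// Write a method that checks whether a string is a palindrome\\n";

        String body = "{"
                + "\"model\": \"code-davinci-002\","   // Codex beta model name (assumption)
                + "\"prompt\": \"" + prompt + "\","
                + "\"temperature\": 0,"                // keep the generated code deterministic
                + "\"max_tokens\": 150"
                + "}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/completions"))
                .header("Content-Type", "application/json")
                .header("Authorization", "Bearer " + System.getenv("OPENAI_API_KEY"))
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The generated code comes back in choices[0].text of the JSON response.
        System.out.println(response.body());
    }
}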
We are living in the future!!
BTW if someone from OpenAI is reading this post, please contact me if you're interested in speaking at this year's Devoxx Belgium event.