Hey! This issue covers Meta's decision to halt its fake-news filters, a soft robotic armband that could transform prosthetic hand control, how AI is reshaping healthcare delivery, and new AI agents in gaming. Plus the latest AI research: visual tokenizers for generation, efficient action tokenization, and advances in language model alignment.
- Learnings from Scaling Visual Tokenizers for Reconstruction and Generation: This study investigates scaling auto-encoders through a new architecture called ViTok, showing how different scaling strategies affect reconstruction versus generative performance and achieving competitive results on image and video generation tasks.
- FAST: Efficient Action Tokenization for Vision-Language-Action Models: The paper introduces Frequency-space Action Sequence Tokenization (FAST), a compression-based approach to tokenizing robot action signals that enables training autoregressive vision-language-action policies on complex dexterous tasks and improves training efficiency over standard discretization methods.
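  To make the idea of frequency-space tokenization concrete, here is a minimal sketch of the core transform: a chunk of continuous robot actions is mapped to the frequency domain with a discrete cosine transform (DCT) and coarsely quantized to integers. This is an illustration only; the function names, the quantization scale, and the omission of the paper's downstream compression stage are all simplifications, not FAST's actual implementation.

  ```python
  import numpy as np

  def dct_matrix(n):
      """Orthonormal DCT-II basis matrix (rows are cosine basis vectors)."""
      k = np.arange(n)[:, None]
      i = np.arange(n)[None, :]
      M = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
      M[0] /= np.sqrt(2)
      return M * np.sqrt(2.0 / n)

  def tokenize(actions, scale=1024):
      """Map a (T, D) action chunk to integer tokens: per-dimension DCT, then rounding."""
      M = dct_matrix(actions.shape[0])
      coeffs = M @ actions                      # frequency-space representation
      return np.round(coeffs * scale).astype(int)

  def detokenize(tokens, scale=1024):
      """Approximate inverse: dequantize, then inverse DCT (transpose of orthonormal M)."""
      M = dct_matrix(tokens.shape[0])
      return M.T @ (tokens / scale)

  # A smooth 50-step, 2-DoF action trajectory round-trips with tiny error,
  # because most of its energy sits in a few low-frequency coefficients.
  actions = np.sin(np.linspace(0, 3, 50))[:, None] * np.array([[1.0, 0.5]])
  tokens = tokenize(actions)
  recon = detokenize(tokens)
  assert np.max(np.abs(recon - actions)) < 1e-2
  ```

  The intuition is the same as in audio or image codecs: smooth trajectories concentrate in low-frequency DCT coefficients, so a frequency-space token stream is far more compressible than naive per-timestep binning.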
- Beyond Reward Hacking: Causal Rewards for Large Language Model Alignment: This paper proposes causal reward modeling, which integrates causal inference into reinforcement learning from human feedback to mitigate the spurious correlations that drive reward hacking and to better align large language models with human preferences.
- Aligning Instruction Tuning with Pre-training: The proposed method, AITP, improves instruction tuning in large language models by identifying gaps in dataset coverage and aligning instruction-response pairs with the pre-training distribution, yielding gains across a range of benchmarks.
If you enjoyed this newsletter, please share it!