alphaXiv

Research Services

Open research discussion directly on top of arXiv.

About us

Open research discussion directly on top of arXiv.

Website
https://alphaxiv.org/
Industry
Research Services
Company size
2-10 employees
Type
Self-employed

alphaXiv employees

Posts

  • alphaXiv reposted

    View Adrian Chan's profile

    Consulting in the UX of AI

    If you're curious to know how AI is being used to generate conversational personas for use helping content creators understand their audiences, this is a pretty interesting read (and impressive project): The LLM reads comments on YouTube videos of creators, extracts topics, feeds them to the LLM to generate personas, makes them available to the creator to prompt and talk to about what interests them in their videos. There's a lot of methodological simplification going on in deriving personas from topics, and from inferring stylistic, learning style, and other attributes from comments. (How to push LLMs to explore beyond cliches, w/o triggering psychedelic tendencies?) But it's an interesting attempt to get creators to think outside the (comment) box, at a minimum. And enlightening for those of us curious to see psychological and "authentic" depth from inside LLMs. https://lnkd.in/gR9EvG4v

    Discuss | Proxona: Leveraging LLM-Driven Personas to Enhance Creators' Understanding of Their Audience | alphaXiv

    alphaxiv.org
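
    The post above describes a concrete pipeline: pull YouTube comments, extract topics with an LLM, turn topics into personas, and let the creator chat with those personas. Below is a minimal sketch of that flow, assuming a generic chat-completion LLM behind a placeholder chat() helper; the prompts and function names are illustrative assumptions, not the Proxona authors' implementation.

```python
# Hypothetical sketch of a comments -> topics -> personas pipeline like the
# one described above. The chat() helper, prompts, and function names are
# assumptions for illustration, not the Proxona implementation.

def chat(prompt: str) -> str:
    """Placeholder for a call to any chat-completion LLM."""
    raise NotImplementedError

def extract_topics(comments: list[str], k: int = 5) -> list[str]:
    # Summarize a capped sample of comments into k recurring topics.
    joined = "\n".join(comments[:200])
    reply = chat(f"List {k} recurring topics in these YouTube comments:\n{joined}")
    return [line.strip("-• ").strip() for line in reply.splitlines() if line.strip()]

def build_persona(topic: str, comments: list[str]) -> str:
    # Ground a short audience persona in one topic plus matching comments.
    sample = "\n".join(c for c in comments if topic.lower() in c.lower())[:2000]
    return chat(
        f"Write a short audience persona (name, interests, style) for viewers "
        f"who care about '{topic}'. Base it on these comments:\n{sample}"
    )

def ask_persona(persona: str, creator_question: str) -> str:
    # Let the creator interview the generated persona.
    return chat(f"You are this audience persona:\n{persona}\n\n"
                f"Answer the creator's question:\n{creator_question}")
```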

  • alphaXiv reposted

    View Zach Chandler's profile

    Open Science at Stanford

    Cool tool alert! You can turn any preprint on arXiv into a conversation, using alphaXiv (a new project sponsored by arXiv Labs).

    As great as preprints are for disseminating ideas rapidly and openly, one of the dangers is that they are not peer reviewed. If you are not already an expert, what are regular people (like me) supposed to make of these findings? Seeing discourse unfold between experts can open learning pathways for students and the public.

    alphaXiv is a little different from some of the (excellent) options already in this space, as it combines elements of a preprint review system with annotation and Stack Overflow-like functionality. One reason to love it is that it integrates with ORCID, so the identity and authority of community voices can be verified, leveraging a bedrock PID in the open science ecosystem.

    I know some of you will raise an eyebrow, asking why we are making yet another tool for this when really great, community-minded tools and toolmakers already exist, among them PREreview and Hypothesis. It's because alphaXiv is different. Part of the reason is that it's as much about the community they are building as anything else: AI/ML researchers. This field changes so fast that by the time peer-reviewed articles get published they are mostly out of date. And there is a lot at stake with getting AI/ML research right, in terms of transparency and trustworthiness. That, and their UX is killer. :)

    Two ways to try alphaXiv:
    1) On any arXiv paper, click to the "Code, Data, Media" tab and toggle the alphaXiv switch, which lets you know if there are any comments there yet.
    2) On any arXiv paper, go up to the address bar in your browser and change the URL from "arxiv.org" to "alphaxiv.org" (leaving everything else the same). Wait for magical things to happen.

    Cool right? The co-founders are two recent Stanford CS Master's students whom I met when they were just rolling this out, Raj Palleti and Rehaan Ahmad. Please check out their work, log in and connect your ORCID! https://www.alphaxiv.org/

    alphaXiv

    alphaxiv.org
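
    The second tip in the post above is just a hostname swap. A tiny illustration follows; the arXiv ID in the example is a placeholder, not a specific paper.

```python
# Rewrite an arXiv URL to its alphaXiv counterpart by swapping the host and
# keeping the rest of the URL unchanged, as the post suggests.
from urllib.parse import urlparse, urlunparse

def to_alphaxiv(arxiv_url: str) -> str:
    parts = urlparse(arxiv_url)
    return urlunparse(parts._replace(netloc="alphaxiv.org"))

# Placeholder arXiv ID, for illustration only.
print(to_alphaxiv("https://arxiv.org/abs/2401.00001"))
# -> https://alphaxiv.org/abs/2401.00001
```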

  • alphaXiv reposted

    View József Konczer's profile

    Physicist, Senior AI/ML Research Engineer

    I read a paper by authors from Google DeepMind and the University of Washington, and posted some comments on alphaXiv: https://lnkd.in/dgzrCTYQ I think fusing neural networks with genetic algorithms is a promising direction, and this paper provides one concrete implementation. I am curious what future research will bring. Feel free to join the conversation. (Cover image cropped from the TeX source of the arXiv paper: https://lnkd.in/d3AdxVX6 )

  • alphaXiv reposted

    View József Konczer's profile

    Physicist, Senior AI/ML Research Engineer

    I "Reviewed" a paper on alphaXiv, so hopefully it will be more accessible for a general audience: https://lnkd.in/dUMWfdfj The paper: "Geometric Structure and Polynomial-time Algorithm of Game Equilibria" Gives a new effective approximative algorithm to find "Perfect (Nash) equilibria" in Multi-Agent Reinforcement Learning type of settings. (See details in the paper, and a more general Introduction and Summary in the "review fragments".) Feel free to comment and contribute on the platform. Illustrating images are from: - Wikipedia: https://lnkd.in/dMmHyxS2 - Google DeepMind: https://lnkd.in/dTssai9Q - The arXiv paper: https://lnkd.in/d2qai_Wf

  • alphaXiv reposted

    View Samar Khanna's profile

    Founding Research Team @ Stealth | CS @ Stanford

    Excited to announce the release of our new paper ExPLoRA: Parameter-Efficient Extended Pre-Training to Adapt Vision Transformers under Domain Shifts!

    Pre-training vision transformers (ViTs) from scratch on large datasets/domains is becoming too expensive. Current state-of-the-art methods require substantial compute resources; DinoV2, for example, requires 96 A100-80GB GPUs to pre-train! Can we build foundation models for new domains without training from scratch? Yes!

    We present ExPLoRA, which enables inexpensive foundation models for new domains using very few parameters. Here's the recipe:
    1. Initialize a ViT with natural-image pre-trained weights such as DinoV2 or MAE
    2. Unfreeze only 1-2 ViT blocks
    3. Use LoRA for attention layers in all other blocks
    4. Continue pre-training on the new domain with the same unsupervised objective (i.e. DinoV2 or MAE)

    Our method achieves impressive results, outperforming state-of-the-art methods by over 8% on linear probing accuracy for satellite imagery and setting a new SoTA for the fMoW-RGB dataset with just parameter-efficient fine-tuning, at 79.15% accuracy. ExPLoRA also bridges the challenging domain shift from natural to multi-spectral satellite images and shows strong generalization across diverse domains such as medical, wildlife, and agricultural imagery from WILDS.

    We hope ExPLoRA accelerates the development of cost-effective foundation models across various domains, unlocking new possibilities in fields such as sustainability and medicine. Big thanks to my co-authors: Medhanie Irgau, David Lobell, Stefano Ermon

    Links:
    Project website: https://lnkd.in/gCUH73rX
    Twitter thread: https://lnkd.in/gFShry93
    Arxiv: https://lnkd.in/gjsBMdwP
    alphaXiv: https://lnkd.in/ghNjWpXK

    ExPLoRA: Parameter-Efficient Extended Pre-training to Adapt Vision Transformers under Domain Shifts

    samarkhanna.com
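
    A rough PyTorch sketch of the four-step recipe in the post above, under stated assumptions: the backbone is a DINOv2/timm-style ViT whose transformer blocks expose their attention projection as blocks[i].attn.qkv (an nn.Linear), and the original unsupervised objective is not reproduced here. This illustrates the idea; it is not the authors' released code.

```python
# Steps 1-3 of the ExPLoRA recipe, sketched for a timm/DINOv2-style ViT.
# Assumption: each transformer block exposes its attention projection as
# `blk.attn.qkv` (an nn.Linear). Step 4 (continued unsupervised pre-training
# with the original DinoV2/MAE objective) is a standard training loop over
# the parameters left trainable here and is omitted.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen nn.Linear plus a trainable low-rank update: y = Wx + scale * B(Ax)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

def apply_explora(vit: nn.Module, unfrozen_blocks=(-1,), rank: int = 8) -> nn.Module:
    # 1. Start from a natural-image pre-trained ViT (passed in as `vit`).
    # 2. Freeze everything, then unfreeze only the chosen block(s).
    for p in vit.parameters():
        p.requires_grad = False
    n = len(vit.blocks)
    unfrozen = {i % n for i in unfrozen_blocks}
    for i, blk in enumerate(vit.blocks):
        if i in unfrozen:
            for p in blk.parameters():
                p.requires_grad = True
        else:
            # 3. LoRA on the attention projections of all remaining blocks.
            blk.attn.qkv = LoRALinear(blk.attn.qkv, rank=rank)
    return vit
```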

  • View alphaXiv's company page

    497 followers

    For decades, researchers have asked whether a PTAS (polynomial-time approximation scheme) exists for game equilibria. In this new work, a PTAS for game equilibria is proposed, implying PPAD=FP. Is the long-standing belief that PPAD contains intractable problems overturned? Excited to have the authors discussing their work here! https://lnkd.in/gCnzWR7c

    Geometric Structure and Polynomial-time Algorithm of Game Equilibria | alphaXiv

    alphaxiv.org
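
    For readers outside algorithmic game theory, the standard definitions behind the claim in the post above (textbook phrasing, not taken from the paper) are:

```latex
% A mixed-strategy profile \sigma = (\sigma_1, \dots, \sigma_n) is an
% \varepsilon-approximate Nash equilibrium if no player i can gain more than
% \varepsilon by deviating unilaterally:
\[
  u_i(\sigma'_i, \sigma_{-i}) \;\le\; u_i(\sigma_i, \sigma_{-i}) + \varepsilon
  \qquad \text{for every player } i \text{ and every mixed strategy } \sigma'_i .
\]
% A PTAS for this problem is an algorithm that, for every fixed
% \varepsilon > 0, outputs such a profile in time polynomial in the size of
% the game (the exponent may depend on \varepsilon).
```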

  • View alphaXiv's company page

    497 followers

    To CoT or not to CoT? Which tasks does Chain-of-thought (CoT) prompting benefit the most? In this paper, the authors show that CoT provides strong performance benefits primarily on tasks involving math or logic, with much smaller gains on other types of tasks. CoT can thus be applied selectively and cost-effectively, and new paradigms that leverage intermediate computation should continue to be developed for LLM applications. The authors are here to discuss their work. Leave your thoughts here! https://lnkd.in/ghTtwU3W Zayne Sprague Fangcong Yin @Juan Diego R. Dongwei Jiang Manya Wadhwa Prasann Singhal @Xinyu Zhao Xi Ye Kyle Mahowald Greg Durrett

    To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning | alphaXiv

    alphaxiv.org
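
    As a concrete illustration of the two prompting styles compared in the post above, here is a minimal sketch; the chat() helper and prompt wording are assumptions, not the authors' evaluation harness.

```python
# Direct answering vs. chain-of-thought (CoT) prompting. Per the paper's
# finding, the CoT variant mainly pays off on math and symbolic-reasoning
# tasks, so it can be applied selectively rather than everywhere.

def chat(prompt: str) -> str:
    """Placeholder for a call to any chat-completion LLM."""
    raise NotImplementedError

def direct_answer(question: str) -> str:
    return chat(f"{question}\n\nRespond with only the final answer.")

def cot_answer(question: str) -> str:
    # Elicit intermediate reasoning before the final answer.
    return chat(f"{question}\n\nLet's think step by step, "
                "then state the final answer on its own line.")
```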

  • alphaXiv reposted

    View Raj Palleti's profile

    Stanford/alphaXiv, an open discussion forum for arXiv papers

    Excited to share that alphaXiv is hiring Founding Engineers!

    At alphaXiv, we're building a platform that allows students, researchers, and industry experts to ask questions and exchange ideas directly on top of arXiv papers, the largest pre-print repository in the world. Over the last year, the journey has been nothing short of exhilarating. What started as a final project for CS193X has evolved into a platform with tens of thousands of researchers around the world sharing, discussing, and critiquing research papers. We're also thrilled to share that we'll soon begin a collaboration with arXiv as an arXiv Labs project!

    Along the way, we've been inspired by the rapid growth of our community through LinkedIn, Twitter, and Hacker News, with industry leaders such as Yann LeCun, Jack Dorsey, and many others expressing excitement for the project. Huge shoutout to my amazing team Rehaan Ahmad and Daniel Kim, as well as the invaluable advice from our advisors Sebastian Thrun, Eric Schmidt, Adriel Saporta, Archit S., Sanmi Koyejo, Michael Bernstein, Kawal Gandhi, and John Ousterhout.

    Our mission is to build a community platform that makes research more open, accessible, and interactive. To achieve our goal of helping researchers, academics, and enthusiasts, we've raised funding from investors and are now hiring founding engineers. If you'd like to join the team, please fill out this form! https://lnkd.in/guC7EzR9

    X: @askalphaxiv

Similar pages