SwiftyNote

Hospitals and Health Care

New York, New York · 74 followers

Securely capture the value in your clinical interactions

About us

SwiftyNote does your histories, assessments, and every documentation task that comes with being a clinician. That's more time and insights for you. For patients, SwiftyNote gives your healthcare provider a seamless recap of your visit without taking their focus away from you. That's better health outcomes. For healthcare administrators and practice owners, SwiftyNote understands, identifies, and justifies medical services. That's more revenue and reimbursement for your organization.

Website
www.swiftynote.com/?utm=li
Industry
Hospitals and Health Care
Company size
11-50 employees
Headquarters
New York, New York
Type
Privately Held
Founded
2023
Specialties
Artificial Intelligence, Medical Documentation, Medical Scribe, Healthcare Administration, and Clinical IT

Locations

  • Primary

    230 Park Ave

    18th Floor

    New York, NY 10169, US

Employees at SwiftyNote

Updates

  • View SwiftyNote's organization page

    74 followers

    Deepgram has been an excellent partner, providing secure and reliable voice transcription to us and our clients. Their team has been incredibly receptive to feedback and supportive throughout the entire integration process—even before we had any healthcare organizations onboard. We're excited to continue working with them and see how this partnership evolves in the months and years ahead.

  • View SwiftyNote's organization page

    74 followers

    Spot on, Morgan. We have the world's knowledge at our command to overcome a whole host of challenges. But controlling its velocity and direction remains a clear challenge. That's exactly why observing frameworks like the one you've described is so important. Thanks to your insights, we're working to allow clinical leaders to wield these constraints and solve real problems. Excited to keep building a great product together!

    View Morgan Jeffries's profile

    Neurologist & Medical Director for AI at Geisinger

    Foundation models are a general-purpose technology, but getting the right behavior out of them often involves giving up some of that generality and constraining them in some way. I've been thinking about this a lot lately, so I came up with a list of all the ways I can think of to constrain foundation models and arranged them into four general categories: model constraints, input constraints, output constraints, and architectural constraints. I'd be curious to hear what others think about this.

    Model constraints are changes to the model weights intended to alter the behavior of the model. They're the most fundamental form of constraint. Every form of fine-tuning can be thought of as a kind of model constraint. They tame the feral animal that is a pre-trained model, getting it to follow instructions, minimize toxicity, adapt to a specific domain, or induce any number of other characteristics.

    Input constraints shape model behavior by modifying its inputs. The most well-known form of input constraint is prompt engineering, but another common variety is input filtering. Input filtering allows LLM chat apps to block any requests that violate their ToS, but it can also be used for clever tricks like inserting secret instructions to prevent leaking of copyrighted material (https://lnkd.in/eans8_iD). Retrieval-augmented generation (RAG) could also be considered a kind of input constraint, since it involves injecting relevant documents into the prompt.

    Output constraints are applied directly to model outputs. Output filtering is just like input filtering but on the opposite end; it's often used for content moderation. Validation is a little like output filtering, but it's usually more grammatically oriented (e.g., checking whether JSON matches a schema). Constrained decoding is a powerful technique that guarantees a model's output will conform to a specified format (see https://lnkd.in/gRT6iHFp).

    Architectural constraints don't control models so much as they limit the importance of any given model output. Techniques like self-consistency or mixture-of-agents that involve comparing foundation model outputs to one another could be included in this category. A less sexy form of architectural constraint is something I'll call BYOR (Bring Your Own Reasoning): instead of asking the model to "think step by step," you think step by step and then explicitly program those steps into your application, only calling the model when necessary. BYOR can make it much easier to catch and correct errors because you're only asking the model to do one thing at a time.

    The downside to all of these is that they increase the complexity of your code and may increase inference costs, but the alternative is to be a thin wrapper. Does this make sense to people? Have I made any glaring omissions? Karl Swanson Kevin Maloy, MD
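    The "validation" flavor of output constraint described in the post can be sketched in a few lines. This is a hypothetical illustration, not SwiftyNote code: `call_model` is a stub standing in for a real LLM API call, and the expected keys are invented for the example.

```python
import json

# Hypothetical stub standing in for a real LLM call; any client
# (OpenAI, Anthropic, a local model, etc.) would slot in here.
def call_model(prompt: str) -> str:
    return '{"diagnosis_codes": ["I10"], "follow_up_days": 30}'

# Invented schema for illustration only.
EXPECTED_KEYS = {"diagnosis_codes", "follow_up_days"}

def validated_extract(prompt: str, max_retries: int = 2) -> dict:
    """Output constraint via validation: reject any model response that
    is not valid JSON with exactly the expected keys, retrying a few
    times before giving up."""
    for _ in range(max_retries + 1):
        raw = call_model(prompt)
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed JSON -> retry
        if isinstance(parsed, dict) and set(parsed) == EXPECTED_KEYS:
            return parsed  # structurally valid -> accept
    raise ValueError("model never produced output matching the schema")

result = validated_extract("Summarize the visit as JSON.")
```

    Unlike constrained decoding, which shapes tokens as they are generated, validation like this works with any black-box API at the cost of occasional retries.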

  • View SwiftyNote's organization page

    74 followers

    Great insight on the future of AI from one of our advisors, Morgan Jeffries.

    View Morgan Jeffries's profile

    Neurologist & Medical Director for AI at Geisinger

    What if foundation models never reach AGI (artificial general intelligence)? Obviously I don't know for sure, but I'm deeply skeptical that they'll ever get there. But so what? Granted, AGI could be incredibly beneficial (or destructive, depending on whom you ask), but if foundation models don't reach that point, are they a bust? Probably not, but they might look that way at first.

    That's largely because they were first presented to the public as a general-purpose tool. Ask the chatbot anything in your own words and it will give you an answer. Of course, we now know that many of the answers aren't very good, even if they look like they are. Getting reliable performance on most tasks requires prompt engineering (not your own words) or engaging with tools like document stores that aren't part of the core technology. As standalone consumer products, foundation models have some novel use cases, but they're pretty far from revolutionary.

    The thing is that consumers almost never use general-purpose technologies directly. The internet is probably the most impactful tech breakthrough in my lifetime, but I don't "use the internet"; I use products that are enabled by the internet. Electricity is even more essential, but if I had to figure out what to do with it on my own, I would only succeed in burning my house down. They're ubiquitous, but they operate in the background. We only think about them when they stop working or when we have to pay utility bills. In fact, the more pervasive they've become, the less we've talked about them. Remember when every new company put ".com" in its name? Yeah, I feel old, too.

    Like those earlier technologies, foundation models can be used for many different purposes, but that also means we have to figure out what specific things to do with them. It's like staring at a blank page, but on a much larger scale. This is sometimes referred to as capability overhang, and getting through it is hard work, requiring a mix of imagination and experimentation.

    We skipped those initial growing pains with foundation models. Instead of finding product-market fit, AI companies and their corporate partners gave us direct access to the raw tech or bolted it onto existing products. No wonder people are questioning its value. If this becomes a breakthrough technology, or even a viable one, it will be because people build great products on top of it that leverage its unique capabilities. Most of these will be restricted to a single domain, like software engineering or medical documentation. They'll have slick UIs. Only a minority will include chat, and in many cases the language model will be hidden from the user entirely. If they become really pervasive, the tech itself will fade into the background. "AI" as part of a company's name will go the way of ".com".

    The good news is that whether foundation models are a breakthrough or a bust, we'll eventually stop talking about them.

  • View SwiftyNote's organization page

    74 followers

    "By enhancing the sharing of knowledge within the clinician community and empowering both healthcare providers and patients to manage health more effectively, technology can bridge critical gaps in care." - Eve Cunningham, MD, MBA. Don't skip this insightful interview with Dr. Cunningham.

    View Anindita Santosa's profile

    Rheumatologist | Healthcare Innovator | Digital Health Transformation | Educator | Patient Advocate

    Glad to see Dr. Cunningham emphasizing the gradual adoption of technologies and the importance of skilled physicians in her recent interview. Integrating technology into processes isn't about replacing humans but about making them safer and more efficient. Read more about Dr. Cunningham's insights here: https://lnkd.in/gp3Q8sg8 If I may humbly add: The best technologies seamlessly blend with our processes, often unnoticed. #AIinhealthcare

  • View SwiftyNote's organization page

    74 followers

    Unlocking balance in a demanding medical career is no easy feat. When Omar A. began his residency, he faced overwhelming stress and burnout, struggling with the endless cycle of medical documentation. The burden was taking a toll on his passion for medicine just as it does for so many other physicians. Looking to reclaim his work-life balance, he decided to pivot his medical career. By switching to a new specialty, expanding his career obligations, and utilizing AI, Omar regained control over his time, reduced stress, and refocused on patient care. Read more about Dr. Abbas's inspiring story here: https://lnkd.in/g4TJP3hv #AIinHealthcare #MedicalDocumentation #HealthTech

    •  Dr. Omar Abbas SwiftyNote Customer Story
  • View SwiftyNote's organization page

    74 followers

    Why settle for generic when you can have precision? Elevate your practice's efficiency and consistency by customizing templates with SwiftyNote. This guide shows you how to create and personalize templates that fit your specific needs, ensuring streamlined documentation every time. Explore the possibilities and take control of your documentation process: https://lnkd.in/ggSbjjDJ #AIinHealthcare #MedicalDocumentation #CustomTemplates

  • View SwiftyNote's organization page

    74 followers

    AI is transforming clinical documentation by enhancing both accuracy and workflow efficiency. Leveraging AI can optimize medical practices and ensure compliance with legal and insurance standards. In this article, we dive into strategies that enhance accuracy and streamline workflows, providing practical insights for medical professionals. Join us in redefining the future of healthcare efficiency. https://lnkd.in/emxzAxTR #AIinHealthcare #MedicalDocumentation

  • View SwiftyNote's organization page

    74 followers

    A study conducted at the University of Wisconsin Health showed that 70.3% of physicians reported burnout before being assigned a scribe. This number dropped to 51.4% after pairing with a scribe—a 26.8% relative decrease. Medical scribes not only reduce physician burnout but also enhance productivity, improve patient care, and ensure accurate medical records. In this article, we explore the different types of medical scribes, the benefits they bring, and how they help transform the efficiency of healthcare practices. https://lnkd.in/gyKt6Abw #Healthcare #MedicalScribes #PatientCare

    • SwiftyNote Medical Scribes 101
  • View SwiftyNote's organization page

    74 followers

    Residents are taught that patient care is their highest-priority task, yet they spend large portions of their day writing notes. Residents rotate through different specialties and need to learn and re-learn each attending physician's note-writing style, a cumbersome and error-prone process. With SwiftyNote, residents can significantly reduce the time spent on note-writing, allowing for more efficient use of their day and a greater focus on patient care. To help residents avoid burnout, we're offering our Pro plan at a 60% discount on top of our Free plan. Read more about how SwiftyNote can alleviate the pain of administrative work: https://lnkd.in/gvrDX7XD #AIinHealthcare #PhysicianBurnout


Similar pages

View jobs