Simbo AI leverages Intel Core and Xeon scalable processors with OpenVINO
Simbo AI leverages Intel Core and Xeon Scalable processors with the OpenVINO toolkit for efficient hardware use and scalability, optimizing its deep-learning speech models for low latency.
SimboAlphus is a smart AI digital assistant for doctors that creates hassle-free clinical documentation, saving them up to three hours per day.
Simbo understands all the clinical terms needed for EMR documentation and also emits structured information to help automate billing, research, workflow automation, and decision support.
The entire system has three components: audio capturing, the Simbo AI platform on the cloud, and the EMR interface. The audio-capturing device and the UI are clients that connect to SIMBO (SaaS on the cloud) using WebSockets. The WebSocket provides a secure channel to stream Audio-Data and receive Structured-Data and a Transcript.
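The client loop described above can be sketched as follows. This is a minimal illustration, not the actual SIMBO protocol: the message shapes (`"type"`, `"text"`, `"data"` fields) are assumptions, and a real client would send binary audio frames and receive these JSON frames over a WebSocket connection.

```python
import json

def handle_message(raw: str, on_transcript, on_structured):
    """Route a JSON text frame from the server to the right callback.

    Hypothetical framing: transcripts and structured results arrive as
    separate frames, tagged by a "type" field.
    """
    msg = json.loads(raw)
    if msg.get("type") == "transcript":
        on_transcript(msg["text"])
    elif msg.get("type") == "structured":
        on_structured(msg["data"])

# Simulate one server frame of each kind arriving on the channel.
transcripts, structured = [], []
handle_message('{"type": "transcript", "text": "you need to take crocin for five days"}',
               transcripts.append, structured.append)
handle_message('{"type": "structured", "data": [{"intent": "med"}]}',
               transcripts.append, structured.append)
```

In a real deployment the two callbacks would feed the EMR interface, while the audio-capturing device pushes frames in the other direction over the same socket.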
As an example, to enter a medicine in the UI, the doctor conventionally needs to click and select the space where medicines are typically entered. Then, having brought the medicine section into focus, the doctor can type (or select from menus) the medication details. With SIMBO integration, the doctor can simply say "you need to take Crocin for five days", and SIMBO generates the following Structured-Data: [{"intent": "med", "data": [{"name": "crocin", "days": "5"}]}]
SIMBO effectively replaces the keyboard-and-mouse interaction with a voice interface. The doctor can speak rather than type or move the mouse. The UI can continue to behave as before, except that it now receives structured data from SIMBO instead of keystrokes and mouse clicks.
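How a UI might consume that Structured-Data payload can be sketched like this. The payload is the example from the text; the handler registry and `ui_state` dictionary are illustrative assumptions, standing in for whatever form-filling logic the real EMR interface uses.

```python
import json

# The Structured-Data example from the article.
payload = '[{"intent": "med", "data": [{"name": "crocin", "days": "5"}]}]'

# Mock UI state: in a real EMR UI this would be the medication section.
ui_state = {"medications": []}

def handle_med(items):
    # Fill the medication section the way typed input would have.
    for item in items:
        ui_state["medications"].append(
            {"name": item["name"], "days": int(item["days"])})

# One handler per intent; new intents plug in without changing the loop.
HANDLERS = {"med": handle_med}

for entry in json.loads(payload):
    HANDLERS[entry["intent"]](entry["data"])
```

The intent-to-handler mapping is what lets the UI "behave as before": each handler updates the same fields that keystrokes and mouse clicks used to.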
Simbo.AI and Intel collaborated on AI model inference optimization based on Intel CPUs (Core i9) and the OpenVINO toolkit. Simbo.AI uses deep learning models to build a simple, open, and ubiquitous speech-recognition engine.
The objective of the collaboration is efficient hardware use and scalability, optimizing the deep-learning speech model for low latency. The Model Optimizer produces an Intermediate Representation (IR) of the model, which can then be inferred with the OpenVINO Runtime. After conversion to the OpenVINO IR format, inference time improved by 2.5x when benchmarked on Intel hardware. This further reduced latency, yielding a high-performance, cost-efficient full-stack AI solution, and the reduction also improved the user experience by delivering real-time output to providers.
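A speedup claim like the 2.5x figure above comes from timing the same workload before and after optimization. Below is a minimal latency harness; the two callables are dummies standing in for the original model and the IR model (in practice, the optimized callable would be the result of OpenVINO's `core.compile_model(model, "CPU")`), so the numbers it produces are illustrative only.

```python
import time

def mean_latency_ms(infer, runs=50):
    """Average wall-clock time per call, in milliseconds."""
    start = time.perf_counter()
    for _ in range(runs):
        infer()
    return (time.perf_counter() - start) / runs * 1e3

def speedup(baseline, optimized, runs=50):
    """Ratio > 1 means the optimized path is faster."""
    return mean_latency_ms(baseline, runs) / mean_latency_ms(optimized, runs)

# Dummy workloads: the "baseline" does ~4x the work of the "optimized" one,
# mimicking an unoptimized model vs. its OpenVINO-compiled counterpart.
baseline_infer = lambda: sum(i * i for i in range(4000))
optimized_infer = lambda: sum(i * i for i in range(1000))
ratio = speedup(baseline_infer, optimized_infer)
```

Averaging over many runs, as here, is what makes a single speedup figure such as 2.5x meaningful despite per-call timing jitter.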