KnoWhiz

Software Development

Bellevue, WA · 115 followers

Transform Materials into Structured Learning with AI

About us

KnoWhiz is an AI-powered learning tool designed to revolutionize autonomous learning. Users input learning objectives or textual materials, and KnoWhiz auto-generates structured learning content and adjusts it based on cumulative outcomes such as quizzes and tests.

Website
https://knowhiz.us/
Industry
Software Development
Company size
2-10 employees
Headquarters
Bellevue, WA
Type
Public Company
Founded
2023

Locations

KnoWhiz employees

Updates

  • View KnoWhiz's organization page

    115 followers

    Check out the new version of DeepTutor: https://lnkd.in/gGd_2daJ

    View DeepTutor's organization page

    4 followers

    DeepTutor v1.1 is Live!

    We’re excited to roll out DeepTutor 1.1, bringing key improvements based on your feedback! Here’s what’s new:

    1. Lite Mode – A faster, streamlined option for users who want a quick overview of a paper without waiting for in-depth analysis.
    2. Improved Speed – We’ve reduced file processing latency while maintaining high-quality responses.
    3. Signup & Verification Fixes – Users who previously faced issues signing up or receiving verification emails should now have a seamless experience.
    4. Updated Landing Page – We’ve refreshed our homepage with more relevant and up-to-date content.
    5. Bug Fixes & UX Enhancements – Various improvements to ensure a smoother, more intuitive experience.

    Thank you for your continued support! Try out the new features at https://lnkd.in/gGd_2daJ and let us know what you think!

  • View KnoWhiz's organization page

    115 followers

    We Just Launched DeepTutor on Product Hunt!

    Hello everyone, we’re beyond excited to share that DeepTutor is officially launched on Product Hunt! Our team has poured tremendous effort into building an AI-powered platform that reimagines the way you interact with PDF documents. With contextual Q&A, advanced graph insights, and seamless source highlighting, DeepTutor is here to transform your learning experience. But this is just the beginning! We need you on this journey. Your feedback, insights, and support are what will shape DeepTutor into the best tool it can be.

    Check us out on Product Hunt: https://lnkd.in/ehnQZNDp
    Visit our website: https://lnkd.in/e933pFm9

    Ways to support us:
    - Drop an upvote if you believe in what we’re building
    - Leave a comment – we’d love to hear your thoughts!
    - Got a feature in mind? Tell us in the comments what you’d love to see in DeepTutor!
    - Share with anyone who might find DeepTutor useful

    To our early users and supporters, thank you! We couldn’t have come this far without you. We’re thrilled to have you alongside us. Here’s to revolutionizing research together!

  • View KnoWhiz's organization page

    115 followers

    KnoWhiz 1.2 is Here! I am excited to share that we have launched KnoWhiz 1.2! This latest update represents a significant leap forward in our mission to innovate the way you learn and retain knowledge. Our team has worked tirelessly to enhance the user experience and incorporate even more powerful features that empower users to achieve their learning goals effectively. Thank you to everyone who has supported us on this journey. We cannot wait for you to explore the new features and unlock your full potential with KnoWhiz!

  • View KnoWhiz's organization page

    115 followers

    Check out this "Recommendation System" course generated by KnoWhiz: https://lnkd.in/g6-CzDEv

    Introduction to Recommendation Systems: Understanding the fundamentals of how recommendation systems work, including collaborative filtering, content-based filtering, and hybrid approaches. This course covers the algorithms and techniques used to create personalized recommendations in various applications.

    Get a 3-month free trial at https://www.knowhiz.us/ and generate flashcards, take self-tests, or ask questions about any topic you want to review!
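As a toy, self-contained illustration (not taken from the course itself) of the item-based collaborative filtering idea named above, the sketch below scores a user's unrated items by cosine similarity to the items they have already rated. The ratings matrix is invented for the example.

import numpy as np

# Made-up ratings matrix: rows = users, columns = items, 0 = unrated.
ratings = np.array([
    [5, 4, 0, 0],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Cosine similarity between item columns.
norms = np.linalg.norm(ratings, axis=0, keepdims=True)
item_sim = (ratings.T @ ratings) / (norms.T @ norms + 1e-9)

def recommend(user: int, k: int = 2):
    """Rank the user's unrated items by similarity-weighted scores."""
    scores = item_sim @ ratings[user]
    scores[ratings[user] > 0] = -np.inf   # never re-recommend items already rated
    ranked = [int(i) for i in np.argsort(scores)[::-1] if np.isfinite(scores[i])]
    return ranked[:k]

print(recommend(0))   # -> [2, 3]: user 0's unrated items, best match first

A content-based or hybrid recommender would replace the item-item similarity above with similarities computed from item features (text, tags, embeddings) or a blend of both signals.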

  • KnoWhiz reposted this

    o1 has sparked tons of ideas for applying LLMs to reasoning problems in science and math, but one of the most interesting applications IMO is prompt optimization…

    TL;DR: Prompt engineering is a black box even for recent frontier models – slight changes in prompts lead to big differences. Automatic prompt engineering (i.e., using an LLM to optimize a prompt) is one of the best tools for solving this black box, but it requires an LLM with very good reasoning capabilities. The proposal of o1 – and its ability to leverage increased inference-time compute for better reasoning – unlocks new potential for automatic prompt engineering.

    What is automatic prompt optimization? Several papers have been published on using LLMs to propose better / improved prompts, e.g., APE [1] and OPRO [2]. I’m referring to these approaches as automatic prompt optimization techniques. The underlying idea is to use an LLM to refine prompts that are sent to another LLM.

    How does this work? Most papers on automatic prompt optimization follow a similar approach (sketched in code below):
    1. Construct a “meta prompt” that asks the LLM to write a new prompt based on prior context (i.e., previous prompts and their performance metrics).
    2. Generate new prompts with an “optimizer” LLM.
    3. Evaluate these prompts using another LLM, producing an objective value / score.
    4. Select the prompts with the best scores.
    5. Repeat steps 1-4 until we can’t find a better prompt.
    Notably, the optimizer LLM and the LLM used for evaluation do not need to be the same! We could use o1 as an optimizer that finds better prompts for other LLMs.

    Practical details. To make this approach work well, we need to include the correct information in our meta prompt. In [2], the authors propose including i) a description of the task, ii) few-shot examples from the task, iii) prior prompts, iv) the performance of prior prompts, and v) general constraints for the prompt. Given the correct context, we can generate high-performing prompts pretty easily.

    Does this work? Interestingly, LLMs seem to be very good at inferring new / better prompts from prior context. For both APE and OPRO, the automatic prompt engineering system is able to discover new prompts that outperform those written by humans. Plus, the prompts produced by these systems can reveal interesting tricks / takeaways for how to prompt certain models properly. These takeaways even generalize to other tasks in many cases.

    How does this relate to o1? The performance of automatic prompt engineering is heavily dependent upon the optimizer LLM’s reasoning capabilities. This LLM must be able to ingest prior prompt information and objective values, then infer new prompts that will perform well. This is a complex reasoning problem. As such, spending more compute at inference time could potentially lead the LLM to discover more and better patterns for successful prompting.

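A minimal, hypothetical sketch of the optimize/evaluate loop described in the post above (in the spirit of APE/OPRO). The optimizer_llm and evaluate callables are placeholders the caller must supply; nothing here is tied to a specific o1 or OpenAI API.

from typing import Callable, List, Tuple

def build_meta_prompt(task_desc: str,
                      examples: List[str],
                      history: List[Tuple[str, float]],
                      constraints: str) -> str:
    """Assemble the 'meta prompt' from the ingredients the post lists:
    task description, few-shot examples, prior prompts with their scores,
    and general constraints."""
    scored = "\n".join(f"PROMPT: {p}\nSCORE: {s:.3f}"
                       for p, s in sorted(history, key=lambda x: x[1]))
    shots = "\n".join(examples)
    return (f"Task: {task_desc}\n\n"
            f"Examples:\n{shots}\n\n"
            f"Previously tried prompts (worst to best):\n{scored}\n\n"
            f"Constraints: {constraints}\n\n"
            "Write ONE new prompt that should score higher than all of the above.")

def optimize_prompt(task_desc: str,
                    examples: List[str],
                    constraints: str,
                    optimizer_llm: Callable[[str], str],   # e.g. a strong reasoning model
                    evaluate: Callable[[str], float],      # scores a prompt on the target LLM
                    seed_prompt: str,
                    rounds: int = 10) -> str:
    """Steps 1-4 from the post, repeated for a fixed budget of rounds."""
    history = [(seed_prompt, evaluate(seed_prompt))]
    for _ in range(rounds):
        meta = build_meta_prompt(task_desc, examples, history, constraints)  # step 1
        candidate = optimizer_llm(meta).strip()                              # step 2
        history.append((candidate, evaluate(candidate)))                     # step 3
    return max(history, key=lambda x: x[1])[0]                               # step 4: keep the best

In practice, evaluate() would run the target LLM over a small labeled dev set and return a metric such as accuracy, while optimizer_llm() would wrap a reasoning-heavy model; the optimizer and evaluation models need not be the same, as the post notes.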
