Using Wasm requires bindings to cross between host and guest. Writing these by hand is fine for one language, but what about two, or five, or ten? Then you enter the world of automated codegen and have to ask: should the generated code feel natural to the target language, or should it map more directly onto the Component Model types? Neither option fits 100% of use cases, so it is up to tool authors to decide which makes the most sense. Developers may struggle with non-idiomatic patterns, leading to verbose, less maintainable code. Following established conventions makes the code feel more familiar, but requires some additional effort to implement. We decided to take the idiomatic path to minimize friction and make it easier for our team, so we know what to expect when moving around the codebase.
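To make the tradeoff concrete, here is a minimal C# sketch of the same interface surfaced both ways. The types and names are hypothetical (this is not Arcjet's generated code or any real tool's output): a WIT-style `result<user, error>` return is exposed first as a direct mapping of the Component Model variant, then as an idiomatic method that throws.

```csharp
using System;

// Hypothetical domain types standing in for WIT records.
public record User(ulong Id, string Name);
public record WasmError(string Message);

// Direct mapping: the Component Model's result<user, error> becomes
// an explicit variant type the caller must unpack by hand.
public readonly struct WitResult
{
    public User? Ok { get; init; }
    public WasmError? Err { get; init; }
    public bool IsOk => Err is null;
}

public static class DirectBindings
{
    // Caller sees the Component Model shape as-is.
    public static WitResult GetUser(ulong id) =>
        id != 0
            ? new WitResult { Ok = new User(id, "example") }
            : new WitResult { Err = new WasmError("not found") };
}

public static class IdiomaticBindings
{
    // Idiomatic mapping: failure becomes an exception, the usual
    // C# convention, so call sites read like any other .NET code.
    public static User GetUser(ulong id)
    {
        var result = DirectBindings.GetUser(id);
        if (!result.IsOk) throw new InvalidOperationException(result.Err!.Message);
        return result.Ok!;
    }
}

public static class Program
{
    public static void Main()
    {
        // Direct: explicit unpacking at every call site.
        var raw = DirectBindings.GetUser(42);
        if (raw.IsOk) Console.WriteLine(raw.Ok!.Name);

        // Idiomatic: reads like ordinary C#.
        var user = IdiomaticBindings.GetUser(42);
        Console.WriteLine(user.Name);
    }
}
```

The direct form keeps a one-to-one correspondence with the interface types, so it is easier to generate and verify; the idiomatic form costs more generator effort but reads like ordinary .NET at every call site, which is the tradeoff described above.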
Arcjet's Posts
Most relevant posts
When is using ToList() wrong in Entity Framework?

Had two people (including the person below who blocked me for my response) tell me using ToList() was wrong in my post from a few days ago. Without additional context it's impossible to say there's anything wrong with using ToList/Async here or in general. The vast majority of EF doc code samples and snippets use ToList, and it appears super regularly in blogs, tutorials, videos, etc., and of course, most importantly, in real-world production code too.

ToList / ToListAsync causes Entity Framework to send the query to the DB and materialize the results; it's a super important part of the EF flow. Are there occasions when using ToList is 'wrong', or better phrased, inappropriate? Of course:

1. ToList / ToListAsync puts all results for a query into memory. For large result sets where memory consumption could be an issue and your app is able to process one row at a time, streaming with ForEach or AsEnumerable may be the better approach.

2. We should also avoid using ToList (or ToArray) if we intend to use another LINQ operator on the result, as this will needlessly buffer all results into memory prematurely, e.g.:

var filteredOrders = context.Orders.ToList().Where(o => o.TotalAmount > 1000); // filters in-memory

In the above case, calling ToList() returns all orders into memory, with the filtering only done on the client side. Depending on the size of the Orders table, this could have major performance implications.

So ... know your context, it really does always depend.
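To make point 2 concrete, here is a self-contained C# sketch using plain LINQ to Objects, with a made-up Order type standing in for an EF Core DbSet. With EF Core, composing Where before ToList lets the filter be translated into the SQL WHERE clause, so only matching rows ever leave the database.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public record Order(int Id, decimal TotalAmount);

public static class Program
{
    public static void Main()
    {
        // Hypothetical in-memory data standing in for context.Orders.
        IQueryable<Order> orders = new List<Order>
        {
            new(1, 500m), new(2, 1500m), new(3, 2500m),
        }.AsQueryable();

        // Anti-pattern: ToList() first materializes EVERY row, then
        // Where() filters in memory on the client.
        var filteredInMemory = orders.ToList()
            .Where(o => o.TotalAmount > 1000)
            .ToList();

        // Better: compose the query first, materialize last. With EF
        // Core the Where becomes part of the SQL WHERE clause, so only
        // matching rows are returned and buffered.
        var filteredInQuery = orders
            .Where(o => o.TotalAmount > 1000)
            .ToList();

        Console.WriteLine(filteredInMemory.Count); // 2
        Console.WriteLine(filteredInQuery.Count);  // 2
    }
}
```

For point 1, EF Core can also stream results one row at a time, e.g. by iterating the query with foreach over AsEnumerable (or await foreach over AsAsyncEnumerable) instead of buffering everything with ToList.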
Blanket banning of programming features is kind of odd. It reminds me a lot of the C/C++ days with all the various memory allocation routines. OpenSSL did this initially and it led to quite a few bugs, because people were forced to use memory allocators that did not quite fit the intended purpose. PS: I am #OpenToWork
This is such a fantastic and timely post! LangChain has indeed revolutionized the way developers build RAG systems, making it more accessible and efficient for a wide range of applications. Your detailed insights and hands-on tutorial are a goldmine for anyone looking to get started with this framework.

LangChain's modular design is a true game-changer. By offering standardized abstractions for critical components like document loaders, embeddings, vector stores, and LLM interactions, it simplifies the entire development process while maintaining flexibility for customization.

The focus on automating essential tasks like chunking, retrieval, and prompt management is another standout feature. These utilities allow developers to focus on the application logic, ensuring faster prototyping and deployment of RAG systems without being bogged down by infrastructure challenges.

Its active ecosystem and extensive documentation are invaluable. Pre-built chains, real-world examples, and seamless integration with popular tools like Pinecone, Weaviate, and OpenAI make LangChain a go-to choice for building scalable and efficient RAG applications.

I also appreciate your emphasis on LangChain's first-mover advantage. By setting the standard for LLM frameworks, LangChain has become the backbone of many innovative RAG projects, enabling companies to explore and implement cutting-edge solutions.

Thank you for sharing your tutorial and video resources! They offer a perfect blend of theory and practice, making it easier for developers at all levels to dive into LangChain. Excited to learn more and start building!

#LangChain #RAGSystems #AIInnovation #LLMFrameworks #RetrievalAugmentedGeneration #TechLeadership
GenAI Evangelist | Developer Advocate | 40k Newsletter Subscribers | Tech Content Creator | Empowering AI/ML/Data Startups
Let's understand how to build RAG systems using LangChain. My step-by-step hands-on tutorial: https://lnkd.in/gYYDdXwH

Well, LangChain has an obvious first-mover advantage in the LLM frameworks ecosystem. LangChain took off when RAG started to become the talk of the AI town. Many companies started experimenting with LangChain to build their RAG applications and systems.

LangChain became the first choice of many because it simplified the process of building RAG applications by providing modular abstractions for common components like document loaders, embeddings, vector stores, and LLM interactions. Its standardized interfaces enable easy swapping of components, while built-in utilities handle chunking, retrieval, and prompt management. The framework's integration with popular tools and databases streamlines development, letting developers focus on application logic rather than infrastructure. LangChain's active ecosystem and documentation also accelerate development through pre-built chains and examples.

Learn more in-depth about LangChain in my recent video: https://lnkd.in/gYYDdXwH
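Since the post describes what the framework automates, here is a dependency-free C# sketch of the same RAG flow: chunk, embed, store, retrieve, assemble a prompt. This is not LangChain's API (LangChain targets Python and JavaScript); the embedding function is a toy bag-of-characters stand-in for a real embedding model.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Minimal sketch of the RAG flow a framework like LangChain automates:
// chunk -> embed -> store -> retrieve -> prompt.
public static class RagSketch
{
    // 1. Chunking: split a document into fixed-size pieces.
    public static IEnumerable<string> Chunk(string text, int size = 200)
    {
        for (var i = 0; i < text.Length; i += size)
            yield return text.Substring(i, Math.Min(size, text.Length - i));
    }

    // 2. Embedding: a toy bag-of-characters vector. A real system
    // would call an embedding model here.
    public static float[] Embed(string text)
    {
        var v = new float[26];
        foreach (var c in text.ToLowerInvariant())
            if (c >= 'a' && c <= 'z') v[c - 'a']++;
        var norm = MathF.Sqrt(v.Sum(x => x * x));
        return norm == 0 ? v : v.Select(x => x / norm).ToArray();
    }

    // 3. Retrieval: cosine similarity against the stored chunks.
    public static string Retrieve(string query, List<(string Chunk, float[] Vec)> store)
    {
        var q = Embed(query);
        return store.OrderByDescending(e => e.Vec.Zip(q, (a, b) => a * b).Sum())
                    .First().Chunk;
    }

    public static void Main()
    {
        var store = Chunk("LangChain provides loaders, splitters, and vector stores...")
            .Select(c => (Chunk: c, Vec: Embed(c)))
            .ToList();

        var question = "What does LangChain provide?";
        var context = Retrieve(question, store);

        // 4. Prompt assembly: the retrieved context is injected into
        // the prompt that would be sent to the LLM.
        Console.WriteLine($"Context:\n{context}\n\nQuestion: {question}");
    }
}
```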
We developers are drawn to complexity like bugs to light. For example, object mapping.

The complex solution? A mapping library. Problems:
1. Debugging complexity → Mapping code is a black box; errors can be difficult to trace.
2. Performance penalty → Reflection introduces performance overhead.
3. Risks of adding yet another library → One more dependency you need to worry about.
4. Run-time errors → The code compiles, but it breaks when you run it.

The simplest solution? A constructor in a DTO (data transfer object). Advantages:
1. Simplicity → Constructors are simple to write and easy to step through in a debugger. Tip: use AI to assist you with the mapping code.
2. Performance → There is no reflection overhead; a constructor is a plain method call.
3. No external dependencies → Less maintenance burden.
4. Type safety → No unexpected run-time errors, since everything is checked at compile time.

Mapping libraries are a good choice for certain use cases. But I'll take simplicity any day of the week.
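A minimal sketch of the constructor approach, with hypothetical User and UserDto types: the mapping is ordinary code the compiler checks, so a renamed or removed property fails the build instead of failing at run time.

```csharp
using System;

// Hypothetical domain entity.
public record User(int Id, string FirstName, string LastName, string Email);

// The DTO maps itself from the entity in its constructor:
// no reflection, no extra dependency, fully compile-time checked.
public class UserDto
{
    public int Id { get; }
    public string FullName { get; }

    public UserDto(User user)
    {
        Id = user.Id;
        FullName = $"{user.FirstName} {user.LastName}";
        // Email intentionally not exposed: the DTO controls its own shape.
    }
}

public static class Program
{
    public static void Main()
    {
        var dto = new UserDto(new User(1, "Ada", "Lovelace", "ada@example.com"));
        Console.WriteLine($"{dto.Id}: {dto.FullName}"); // 1: Ada Lovelace
    }
}
```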
New Blog: Why IActorRef.Tell Doesn't Return a Task

Occasionally, Akka.NET is accused of not being "idiomatic" .NET. The most common source of that complaint is our relationship with the Task Parallel Library (TPL): namely, that the main method for interacting with actors, IActorRef.Tell, returns void. Tell is a void method and it's asynchronous, so we often hear "it'd be much more idiomatic if this returned a Task!" In this post I explain why we've kept the design as-is, and why not returning a Task here actually works to end-users' benefit: https://lnkd.in/gCUtfGF8
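A short sketch of the distinction using real Akka.NET APIs (requires the Akka NuGet package; the actor and messages are made up): Tell is fire-and-forget and returns void, while Ask<T> is the explicit opt-in for callers that genuinely need a Task wrapping a reply.

```csharp
using System;
using System.Threading.Tasks;
using Akka.Actor;

public sealed class GreeterActor : ReceiveActor
{
    public GreeterActor()
    {
        // Reply to the sender; from the actor's side this is also a Tell.
        Receive<string>(name => Sender.Tell($"Hello, {name}!"));
    }
}

public static class Program
{
    public static async Task Main()
    {
        using var system = ActorSystem.Create("demo");
        var greeter = system.ActorOf(Props.Create(() => new GreeterActor()), "greeter");

        // Fire-and-forget: Tell returns void and never blocks the caller.
        greeter.Tell("world");

        // When you really do need a reply, Ask<T> gives you a Task.
        var reply = await greeter.Ask<string>("Akka.NET", TimeSpan.FromSeconds(3));
        Console.WriteLine(reply); // Hello, Akka.NET!
    }
}
```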
Echo Embeddings: Enhancing Token Context in Autoregressive Models

Echo Embeddings is an innovative strategy designed to enhance token embeddings in autoregressive models. Because of causal attention, these models struggle to incorporate information from tokens appearing later in the input. To address this limitation, Echo Embeddings presents the input twice during the embedding process, so that the token embeddings of the second occurrence can capture context from the entire input. The method has demonstrated strong performance on MTEB (the Massive Text Embedding Benchmark) and is compatible with various other techniques for improving embedding models. #EchoEmbeddings #TokenContext #AutoregressiveModels #EmbeddingProcess #ContextEnhancement #MTEB #Technology #Innovation
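A minimal sketch of the pooling trick in C#, with a toy per-token embedding function standing in for the language model (the published method also uses a specific prompt template, omitted here): the input is presented twice, and only the second occurrence's tokens, which can attend to the entire first copy, are mean-pooled.

```csharp
using System;
using System.Linq;

public static class EchoPooling
{
    // Toy stand-in for a causal LM that returns one vector per token.
    // A real model's token vectors depend on all PRECEDING tokens,
    // which is exactly why the second copy is the useful one.
    public static float[][] TokenEmbeddings(string[] tokens) =>
        tokens.Select((t, i) => new[] { (float)t.Length, i }).ToArray();

    public static float[] EchoEmbed(string[] tokens)
    {
        // Present the input twice: [x, x].
        var doubled = tokens.Concat(tokens).ToArray();
        var vectors = TokenEmbeddings(doubled);

        // Mean-pool only the second occurrence (indices n..2n-1).
        var n = tokens.Length;
        var dim = vectors[0].Length;
        var pooled = new float[dim];
        for (var i = n; i < 2 * n; i++)
            for (var d = 0; d < dim; d++)
                pooled[d] += vectors[i][d] / n;
        return pooled;
    }

    public static void Main()
    {
        var embedding = EchoEmbed(new[] { "echo", "embeddings", "demo" });
        Console.WriteLine(string.Join(", ", embedding));
    }
}
```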
Monoliths are not just at the application level. In the PDF attached below, you can see an 11% performance improvement from just refactoring a large class into multiple smaller classes (which I call 'Type_Safe__Step__*.py'). What is really interesting about this case is that I was not expecting any performance improvement at all; if anything, I would not have been surprised if there was a small overhead. You can read more about this 'mystery' here https://lnkd.in/etMWa5vX and about the kind of performance tests I'm writing here https://lnkd.in/eUmkfk7G

The next step is the more interesting one: the actual refactoring of the code so that I can find the root cause of the performance issues (described in detail here https://lnkd.in/eTePbaCV)

What was key to making even this initial refactoring possible was the 94% to 100% code coverage that the OSBot_Utils Type_Safe classes have, which gives me great confidence to move code around, knowing that if all my tests pass, there have been no side effects. By the way, these code coverage values don't do justice to the current OSBot_Utils Type_Safe test suite, because there are tons of edge cases, especially lots of 'bug' and 'regression' tests, which cover just about all possible use cases (for more on this 'start with a bug test' philosophy see https://lnkd.in/eWKDkzBU)

---- For more examples of using GenAI to improve devs' tech documentation, see this post series: https://lnkd.in/eXVFV9hE
[Code generation with Self-Infilling] Instead of using plain left-to-right decoding, this work proposes a self-infilling operation, improving performance across multiple benchmarks.
Checking out our recent work accepted at #ICML2024:

Self-Infilling Code Generation: evolving autoregressive LLMs into a non-monotonic decoding process, letting LLMs write code in a more natural and human-like way.
Paper: https://lnkd.in/gQfSdqN7
Code: https://lnkd.in/ghZJ5bP3

InfiAgent-DABench: Evaluating Agents on Data Analysis Tasks. As part of our InfiAgent (LLM-based agent) framework, we are releasing a data analysis agent that solves complex data analysis tasks by interacting with an execution sandbox, together with a comprehensive, high-quality benchmark.
Paper: https://lnkd.in/gcQ6se8W
Project: https://lnkd.in/gjXUgB4W
Code: https://lnkd.in/gAuJGqxH

Congrats to all of our collaborators for their tremendous efforts and the incredible work they've produced! It's truly inspiring to see the dedication and talent of everyone involved.
Read more: https://blog.arcjet.com/the-wasm-component-model-and-idiomatic-codegen/