Enterprises face increasing challenges in bringing AI to production while maintaining security, compliance, and scalability. Many teams work with unstructured data such as computer vision, audio, text, and LLMs, and need a solution that operates securely on-premise. DagsHub is now integrated with Red Hat OpenShift and OpenShift AI, providing an end-to-end machine learning platform that covers:
- Dataset curation and annotation
- Experiment tracking and model management
- Secure, scalable MLOps workflows
With this integration, AI teams can develop, iterate, and deploy models within their own infrastructure without compromising security or performance. Read the full announcement: https://lnkd.in/dPwZffZJ
About us
DagsHub allows you to curate and annotate multimodal datasets, track experiments, and manage models on a single platform. With DagsHub you can transform petabytes of vision, audio, and LLM data into golden datasets to improve your AI models.
- Website
- https://dagshub.com
- Industry
- Software Development
- Company size
- 11-50 employees
- Headquarters
- San Francisco
- Type
- Privately Held
- Specialties
- MLOps, Data Science, Machine Learning, DataOps, Data Labeling, and AI platform
Products
DagsHub
Data Science & Machine Learning Platform
DagsHub is where people build data science projects. Leverage popular open-source tools to version datasets & models, track experiments, label data, and visualize results. Get started: https://dagshub.com/docs Join our community: https://discord.com/invite/9gU36Y6
Locations
- Primary
- San Francisco, US
DagsHub employees
Updates
-
DagsHub reposted this
"DagsHub: The GitHub for Data Scientists." Data science and machine learning projects often feel like herding cats: datasets, models, and experiments all over the place! Enter DagsHub, the platform that makes collaboration in ML as seamless as GitHub does for code. What makes DagsHub a game-changer?
- Version control for everything: track your datasets, models, and even ML pipelines.
- Experiment tracking: reproduce results with integrated experiment logs.
- Collaboration-friendly: work with your team like a pro, thanks to its Git-based backend.
- Supports open source: built for teams that love open data and transparency.
In short, DagsHub is where data scientists meet developers for version-controlled, reproducible AI. If you're tired of messy ML workflows, maybe it's time to give DagsHub a try. What's your favorite collaboration tool for data projects? Let's discuss! #DAGsHub #MachineLearning #DataScience #ArtificialIntelligence #MLWorkflows #MLOps #ExperimentTracking #DataVersionControl #MLCollaboration #GitForData #OpenSource #TechInnovation #DataEngineering #AICommunity
-
What a RAG system looks like from the inside
What does a RAG (Retrieval-Augmented Generation) system look like from the inside? RAG frameworks combine the strengths of large language models (LLMs) with external knowledge bases. By combining what #LLMs learned during training with real-time information from external sources, RAG greatly improves what these models can do. This approach lets models give more accurate and current responses by drawing on both their learned knowledge and fresh external information, and it has led to diverse RAG applications and three distinct RAG paradigms:
1. Naive RAG: combines model text with simple data retrieval.
2. Advanced RAG: deeply integrates retrieved data for precise responses.
3. Modular RAG: uses specialized modules for flexible response generation.
At DagsHub, we enable the development and evaluation of #RAG systems. Our platform provides tools for creating high-quality #datasets, integrating human expertise into the evaluation process, and tracking prompt engineering efforts.
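For the curious, here is a minimal sketch of the naive RAG pattern described above. It assumes the sentence-transformers package for embeddings and a tiny in-memory document list; the generate() stub stands in for whichever LLM client you actually use.

```python
# Naive RAG in miniature: embed documents, retrieve the closest ones for a
# query, and stuff them into the prompt sent to an LLM.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "DagsHub integrates with MLflow for experiment tracking.",
    "RAG systems combine retrieval with LLM generation.",
    "Object detection models locate and classify objects in images.",
]
doc_embeddings = encoder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = doc_embeddings @ q
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

def generate(prompt: str) -> str:
    """Placeholder for whichever LLM client you actually use."""
    return f"[LLM response to a {len(prompt)}-character prompt]"

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

print(answer("How does DagsHub track experiments?"))
```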
-
Object detection is going to be pretty much everywhere
If you didn't already know, nearly every action you take in the future will leverage #objectdetection technology. When you drive to the supermarket, your autonomous car will identify traffic signs. Inside the supermarket, cameras will track your behavior to analyze customer patterns and product placement. Meanwhile, at home, your security camera will discern whether there's a potential threat approaching. This technology will be integral to our #security, economy, and daily lives. Accuracy and speed in object detection are crucial for automating these tasks. Whether you're a data engineer, an enthusiast, or just curious, these models will play a role in your life. Here are the top models you need to know:
1) YOLO is a popular object detection model that processes images in a single stage, dividing them into cells to identify objects and their probabilities.
2) EfficientDet optimizes model depth, width, and resolution for scalability, enhancing performance within memory and FLOPs limits.
3) RetinaNet's focal loss function reduces class imbalance by assigning lower weights to easy negatives, improving focus on positive and challenging examples.
4) Faster R-CNN's Region of Interest (ROI) pooling technique segments images for classification, requiring fewer training images.
5) Mask R-CNN builds on Faster R-CNN by adding instance segmentation, using FPN and ROIAlign for precise pixel-level object detection.
DagsHub accelerates your computer vision projects from model selection to deployment, offering end-to-end solutions for object detection and keeping you ahead in #deeplearning.
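As a quick illustration, here is a minimal sketch of running one of the models listed above (Faster R-CNN) with torchvision's pretrained weights; the random tensor stands in for a real image you would load yourself.

```python
# Run a pretrained Faster R-CNN on an image and print confident detections.
# The random uint8 tensor is a stand-in for a real image, e.g.
# torchvision.io.read_image("your_image.jpg").
import torch
from torchvision.models.detection import (fasterrcnn_resnet50_fpn,
                                          FasterRCNN_ResNet50_FPN_Weights)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

img = torch.randint(0, 256, (3, 480, 640), dtype=torch.uint8)  # placeholder image
with torch.no_grad():
    prediction = model([preprocess(img)])[0]

for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score > 0.8:  # keep only confident detections
        print(weights.meta["categories"][int(label)], box.tolist(), float(score))
```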
-
DagsHub reposted this
LLMs are versatile tools that require specialized training to reach their full potential. Fine-tuning is the process of adapting a general-purpose LLM to excel at specific tasks or within particular domains. Similar to customizing a recipe with unique spices, fine-tuning infuses an LLM with the knowledge and abilities necessary to meet specific organizational needs. Without fine-tuning, LLMs function as broad knowledge bases, often lacking the depth or focus required for practical applications. This can result in irrelevant, inaccurate, or even harmful outputs. In business settings where precision and reliability are paramount, the consequences of an unrefined #LLM can be severe. DagsHub provides a centralized workspace for #datascientists to manage their entire project lifecycle, from #data to models, while fostering open collaboration.
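As a rough illustration of what fine-tuning looks like in practice (not DagsHub-specific), here is a minimal sketch using the Hugging Face Trainer; the base model and dataset names are examples only.

```python
# Supervised fine-tuning sketch: adapt a general-purpose base model to a
# specific task. Model name and dataset are illustrative examples.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"            # example base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")                    # example task-specific data
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),  # small slice for the sketch
)
trainer.train()
```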
-
DagsHub reposted this
For quite a long time I have been focused on writing a lengthy and detailed article on different approaches to developing a robust ML model, one of which is continual learning (CL). The idea of CL arises from how humans are capable of learning complex matters while preserving old information. We also tend to leverage that old information to learn new things quickly. We are adaptable. It is not the same with ML systems: they have to be retrained on a new set of data, which is of course time-consuming and potentially expensive. In AI, continual learning is the process of injecting or adding new information to a trained model while preserving the old information, mimicking human cognitive processes. I got the opportunity to write this article on CL with DagsHub, along with Michał Oleszak and Daniel Tannor, where we explain the various elements involved in CL (types, approaches, and challenges) and provide a practical approach to learning CL in PyTorch. You will learn a lot of valuable insights from this article. You can find the article link in the comment below.
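To give a flavour of the topic, here is a small sketch of one common CL approach, experience replay, in PyTorch. It is an illustration of the idea only, not the code from the article.

```python
# Experience replay: keep a small buffer of old examples and mix them into
# each new task's batches so the model does not forget earlier tasks.
import random
import torch
from torch import nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

replay_buffer = []      # stores (x, y) pairs seen in earlier tasks
BUFFER_SIZE = 500

def train_on_task(loader):
    for x, y in loader:
        if replay_buffer:   # mix in replayed samples from previous tasks
            old_x, old_y = zip(*random.sample(replay_buffer, min(32, len(replay_buffer))))
            x = torch.cat([x, torch.stack(old_x)])
            y = torch.cat([y, torch.stack(old_y)])
        loss = loss_fn(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        for xi, yi in zip(x[:4], y[:4]):          # remember a few new examples
            if len(replay_buffer) < BUFFER_SIZE:
                replay_buffer.append((xi.detach(), yi.detach()))

# Example usage with two synthetic "tasks":
for _ in range(2):
    xs, ys = torch.randn(200, 20), torch.randint(0, 2, (200,))
    loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(xs, ys), batch_size=32)
    train_on_task(loader)
```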
-
DagsHub reposted this
Why are transformers so good at understanding language? The answer is self-attention. Self-attention lets transformers focus on different parts of the input all at once instead of one piece at a time. It's kind of like giving the model the ability to understand the big picture by mapping the relationships between all of the little pieces within the data. And this is how they pick up on complex patterns and connections. One cool detail is that, combined with positional encodings, self-attention lets the model learn how the order and spacing of words matter, rather than having those relationships hard-coded. And that's part of why it's so powerful. So in other words, self-attention is not just another tool. It's what unlocks much of the power of the transformers behind modern LLMs.
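Here is what the core computation looks like: a minimal scaled dot-product self-attention in PyTorch, with multi-head attention and positional encodings omitted for brevity.

```python
# Scaled dot-product self-attention: every token attends to every other token
# at once, and the attention weights capture how strongly positions relate.
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """x: (batch, seq_len, d_model); w_*: learned projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5   # pairwise relevance
    weights = F.softmax(scores, dim=-1)             # attention distribution
    return weights @ v                              # mix values by relevance

d_model = 64
x = torch.randn(1, 10, d_model)                     # 10 tokens, batch of 1
w_q, w_k, w_v = (torch.randn(d_model, d_model) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)              # shape: (1, 10, 64)
```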
-
DagsHub reposted this
We've enhanced our experiment tracking to let you see your model's predictions and outputs as they evolve during training. Visual insight into model behavior is critical, yet often overlooked in ML workflows. So we're introducing an integrated experiment artifacts view on DagsHub. Key benefits:
- Real-time visual feedback: watch your model learn through images, audio, and even 3D visualizations
- Comprehensive artifact support: view text, model files, and even CSV files alongside metrics
- Seamless integration: works with the OSS MLflow API you're already using
- Coming soon: HTML, notebooks, artifact diffing, and more
How it works:
1. Use mlflow.log_artifacts() to attach files to your experiment (see the sketch below)
2. Go to the Experiments tab in your DagsHub repo
3. Visualize artifacts directly in the experiment view, no context switching required
As ML practitioners, we know that numbers alone don't tell the whole story. Now you can literally see your model's progress, catching potential issues early and gaining deeper insights. What other visual tools would enhance your ML workflow? Share your thoughts below! Thanks Tal for building, Anna for design, and the entire team for shipping. Also, thanks MLflow for being awesome!
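A minimal sketch of that flow with the standard MLflow API; the tracking URI is a placeholder for your own DagsHub repository, and the artifact files here are stand-ins for real training outputs.

```python
# Log metrics and artifact files with plain MLflow so they appear in the
# experiment artifacts view. The tracking URI below is a placeholder;
# credentials are typically supplied via the usual MLflow environment
# variables (e.g. MLFLOW_TRACKING_USERNAME / MLFLOW_TRACKING_PASSWORD).
import os
import mlflow

mlflow.set_tracking_uri("https://dagshub.com/<user>/<repo>.mlflow")  # placeholder

os.makedirs("outputs", exist_ok=True)
with open("outputs/predictions_epoch_10.txt", "w") as f:  # stand-in for a real plot/prediction file
    f.write("sample predictions...")

with mlflow.start_run():
    mlflow.log_metric("val_accuracy", 0.91)
    mlflow.log_artifact("outputs/predictions_epoch_10.txt")  # a single file
    mlflow.log_artifacts("outputs")                          # or a whole directory at once
```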
-
Check out this awesome post about the benefits of image embeddings, industry use cases, and best practices. Thanks, Ignacio Peletier Ribera!
Are you interested in learning about image embeddings? I just published an article on the DagsHub blog! Check it out to dive into their benefits, industry use cases, and best practices! https://lnkd.in/drdZ5vn7 #Embeddings #DeepLearning #ComputerVision
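As a small taste of the topic, here is a sketch of extracting image embeddings with a pretrained ResNet in torchvision; the random tensor stands in for a real image you would load yourself.

```python
# Drop the classification head of a pretrained ResNet and use the pooled
# features as a fixed-length embedding you can index, cluster, or compare.
# The random uint8 tensor is a stand-in for a real image, e.g.
# torchvision.io.read_image("your_image.jpg").
import torch
from torch import nn
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
backbone = resnet50(weights=weights)
backbone.fc = nn.Identity()        # remove the classifier, keep the 2048-d features
backbone.eval()
preprocess = weights.transforms()

img = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)  # placeholder image
with torch.no_grad():
    embedding = backbone(preprocess(img).unsqueeze(0))          # shape: (1, 2048)
print(embedding.shape)
```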
-
We're very lucky to be working with the top data scientists at MACSO. Check out the full case study.
I'm really proud to share our amazing partnership results with MACSO. Their ambitious ML team, led by Hwan, is doing mind-blowing work at the intersection of AI, edge computing, AgTech, and more. From pinpointing sources of air pollution to revolutionizing livestock monitoring, MACSO is proving that huge breakthroughs can happen. I'm proud that DagsHub gets to partner with them on this journey of innovation. By providing intuitive tools for experiment tracking, data management, and seamless collaboration, we've been able to help MACSO:
- Increase experiment speed by 30%
- Reduce data prep time by 50%
- Boost team collaboration efficiency by 30%
As Hwan put it: "DagsHub has been a game-changer for us. It not only streamlined our ML workflows but also ignited our team's creative potential, allowing us to experiment fearlessly and innovate rapidly. DagsHub is not just a tool; it's a catalyst for transformation in ML development." From all of us at DagsHub, we're honored to lock arms with the brilliant minds at MACSO. Their ability to reimagine what's possible in AI, AgTech, and edge computing is amazing. Check out the comments for the full case study. #machinelearning #mlops #edgeai #agritech #datascience #startup
-