From the latest technology news to Super Bowl commercials (aka Matthew McConaughey's stupid dinner in the rain), there's a lot of hype right now around AI agents. Unlike chatbots, these autonomous AI systems can perform complex tasks across platforms and promise to revolutionize industries, including media. I'm excited about agentic AI... but really worried at the same time. You should be too. Here's why.
The Opportunity: AI agents can streamline workflows, from managing commercial pre-empts to real-time newsroom fact-checking. Productivity will soar.
The Concern: If AI takes over too much of that work, what happens to human workers? Tech leaders like OpenAI's Sam Altman are already exploring universal basic income as a way to keep parts of society from destabilizing. What does Altman know that we don't?
Media companies must also prepare themselves for this AI-dominated future. How will agentic workflows change journalism, production, and content creation? How can media organizations adapt to remain relevant and sustainable?
Read more in today's featured article on TVNewsCheck. Thank you Rick Ducey, Robert Caulk, and Kurt Christopher for sharing your insights! Article link below.
ARTICLE LINK: https://lnkd.in/eC27rb7H
#AIagents #AI #basicincome #universalbasicincome #SamAltman #OpenResearch #jobdisplacement #AItechnology #mediaindustry #billionaires #workforceautomation #economicimpactofAI #societalchange #OpenAI #ChatGPT
Emergent Methods
Software Development
Arvada, Colorado · 567 followers
Applied machine learning for real-time adaptive modeling.
About us
Emergent Methods, established in 2022, is one of the top knowledge-structuring applied research and development laboratories in America. Headed by Dr. Robert Caulk, a prolific computer science academic with over 30 publications and thousands of citations, Emergent Methods has a reputation for its dedication to quality, trust, transparency, bias mitigation, and open-source software. In 2024, it released GLiNER News, a top-10 most downloaded model on HuggingFace that year, which has eclipsed 8.5 million downloads and remains the best and most downloaded entity extraction model in the world. Emergent Methods' research extends into scalable compute, where it released Flowdapt, an open-source cluster orchestration software designed for real-time aggregation, processing, and storage of information. Flowdapt is the core engine of Emergent Methods' flagship data source, AskNews, which provides premium news data to over 2,000 applications around the world, including top university laboratories working on misinformation detection, geopolitical forecasting and risk, and much more. Finally, our team's experience extends into advanced user interfaces and dashboards, evidenced by Newsplunker, our analyst dashboard, which is actively helping analysts explore and interact with the largest news knowledge graph on the planet.
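Since the description highlights GLiNER News as an entity extraction model, here is a minimal sketch of how such a model is typically used via the open-source gliner package. The model ID passed to from_pretrained is an assumption for illustration; check the HuggingFace model card for the exact identifier.

```python
# Minimal sketch: zero-shot entity extraction with a GLiNER-style model.
# Assumes the open-source `gliner` package; the model ID below is illustrative,
# check Emergent Methods' HuggingFace page for the exact identifier.
from gliner import GLiNER

model = GLiNER.from_pretrained("EmergentMethods/gliner_medium_news-v2.1")

text = (
    "Emergent Methods, headquartered in Arvada, Colorado, released "
    "Flowdapt to power its AskNews data platform."
)
# GLiNER is zero-shot: the label set is supplied at inference time.
labels = ["organization", "location", "software"]

for ent in model.predict_entities(text, labels, threshold=0.5):
    print(f'{ent["text"]:<20} {ent["label"]:<15} {ent["score"]:.2f}')
```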
- Website: https://emergentmethods.ai
- Industry: Software Development
- Company size: 2-10 employees
- Headquarters: Arvada, Colorado
- Type: Self-Owned
- Founded: 2022
- Specialties: machine learning, adaptive modeling, generalized frameworks, and open-source
Locations
- Primary: Arvada, Colorado 80005, US
Employees at Emergent Methods
Updates
The EU is cracking down on DeepSeek's Chinese chat platform. But don't forget that DeepSeek's model can be (and is) hosted outside of China as well (e.g., Fireworks.ai, HuggingFace), where "espionage" on the data would have nothing to do with China and everything to do with the country/company where the model is being hosted. Link to the full story in the comments. #deepseek #crackdown #ban #AIban #chinese #china #ccp
How many organizations have fought with Trump during the past 10 days? The Trump admin is really stirring the pot right now... it is fighting with at least five different organizations. Let's go Newsplunking; share link in the comments. #news #llm #investigative #analytical #newsplunker #ainews #transparency #citation #trump #department #education #congress
Voir's Super Bowl LIX forecast is in. My apologies to any Eagles fans out there. Share link in the comments. #superbowl #prediction #forecast #chiefs #eagles #philadelphia #kansascity
Have you deleted DeepSeek (yet)? In just the last few weeks, millions of Americans downloaded the new AI app DeepSeek, including journalists. Now experts are advising caution, especially for newsrooms that might be testing and experimenting with the app's features on personal devices.
Key Concerns:
- Data privacy risks under Chinese law
- Potential for information manipulation (yikes!)
- Extensive data collection, including keystroke patterns
- Recent security breaches exposing user data
So, what should media companies do? My latest article for #TVNewsCheck includes recommendations from industry experts specifically for news-based organizations: https://lnkd.in/eWu7sWEG
Special thanks to Ian Eck from Pickaxe, Matt Pearl from the Center for Strategic and International Studies (CSIS), Trevor Wiseman from Griffin Media, Dr. Elin Törnquist from Emergent Methods, and Joshua Brandau from Nota for sharing their insights for our industry.
How many people in your newsroom still have the DeepSeek app installed? Has your IT department blocked the URL? Or... is DeepSeek R1 not a major concern for your company?
#AIinMedia #DataPrivacy #JournalismEthics #MediaTech #InformationSecurity #AIEthics #MediaLeadership #TechNews #DigitalSafety #AIJournalism #DeepSeek #ChinaAI #DataSecurity #TVNewsCheck #DigitalNewsTools #AIJournalismRisks #MediaTechPolicy #InformationWarfare #AIBiasInNews #MediaDataProtection #FutureOfNewsrooms #AIDisruptionInMedia
Check it out: our CEO, Robert Caulk, hopped on the How AI Is Built podcast for a geek-out with Nicolay Christopher Gerold, where they went deep on context engineering, data structuring (especially with Qdrant), and internet slop. #llm #rag #news #contextengineering
Working to make AI boring (predictable, reliable, safe). CEO @AISBACH | Host How AI Is Built | xTUM, xNYU
Clean data beats clever models. New episode with Robert Caulk dropping tomorrow. We will talk a lot (literally a lot, longest podcast to date) about context engineering and the combination of knowledge graphs, vector search, and LLMs. You can throw noise at an LLM and get something that looks good, but if you want real accuracy, you need to strip away the clutter first. Every piece of noise pulls the model in a different direction. How AI Is Built #llm #rag
Announcement: For Qdrant 1.13 - we present Gridstore, and it's a lot faster than RocksDB! (https://lnkd.in/dTDbxdNY)
Our in-house storage engine for sparse vectors and payload has doubled our ingestion speed and eliminated those annoying latency spikes. Qdrant's Core Team built Gridstore from the ground up with a unique architecture featuring multiple layers that optimize retrieval and storage management. The result is an engine that keeps lookups and space allocation fast—even when dealing with variable-sized data.
Architecture: In Gridstore, fixed-size blocks hold the data, and pointers indicate where a value begins and its length within these blocks. The tracker is essentially an array of these pointers, enabling fast lookups by key. A bitmap (or mask layer) keeps track of which blocks are occupied or free. The gaps layer then uses this bitmap to identify large contiguous free spaces, streamlining space allocation and updates.
Benchmarks: In our benchmark tests, Gridstore outperformed RocksDB with excellent results. Data ingestion was twice as fast, and the engine maintained smoother throughput under heavy workloads. With a comparable storage footprint, the dramatic boost in speed and consistent performance under pressure is truly remarkable.
Read our latest article on Gridstore: https://lnkd.in/dTDbxdNY
This breakthrough isn't just an upgrade—it's a whole new way to think about data storage. Stay tuned for more updates as we add this to a Rust crate in the near future!
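To make the layers above concrete, here is a toy Python sketch of the general idea (fixed-size blocks, a tracker of pointers, a free-block bitmap, and gap scanning). It is an illustration only, not Qdrant's actual Rust implementation, and every name in it is made up.

```python
# Toy illustration of the Gridstore layout described above: fixed-size blocks,
# a "tracker" of (start_block, length) pointers indexed by key, and a bitmap
# marking which blocks are free. NOT Qdrant's implementation, just a sketch.
BLOCK_SIZE = 128

class ToyGridstore:
    def __init__(self, num_blocks: int = 1024):
        self.blocks = bytearray(num_blocks * BLOCK_SIZE)
        self.free = [True] * num_blocks                  # bitmap / mask layer
        self.tracker: dict[int, tuple[int, int]] = {}    # key -> (first block, byte length)

    def _find_gap(self, n: int) -> int:
        """Gaps layer: find n contiguous free blocks."""
        run = 0
        for i, is_free in enumerate(self.free):
            run = run + 1 if is_free else 0
            if run == n:
                return i - n + 1
        raise RuntimeError("store is full")

    def put(self, key: int, value: bytes) -> None:
        n = -(-len(value) // BLOCK_SIZE)                 # ceil: blocks needed
        start = self._find_gap(n)
        off = start * BLOCK_SIZE
        self.blocks[off : off + len(value)] = value
        for b in range(start, start + n):
            self.free[b] = False
        self.tracker[key] = (start, len(value))

    def get(self, key: int) -> bytes:
        start, length = self.tracker[key]                # O(1) lookup via the tracker
        off = start * BLOCK_SIZE
        return bytes(self.blocks[off : off + length])

store = ToyGridstore()
store.put(42, b"variable-sized payload")
print(store.get(42))
```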
Stop scraping, start scaling. AskNews' tournament-winning premium news API is powering geopolitical forecasting tools, prediction markets, and finance applications. Your LLM is what it eats; don't settle for dirty data. Let your LLM taste the scotch of real-time and archive news data.
1. Reduce your LLM costs with AskNews' proven token-optimized data.
2. Join 2,000+ developers building hallucination-free applications.
3. Leverage AskNews' state-of-the-art research with a single line of code.
#llm #rag #agent #newsapi #news #contextengineering #api
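As a rough illustration of the "single line of code" claim, here is a hedged sketch using the AskNews Python SDK; the class, method, and parameter names reflect my reading of the SDK and may differ from the current release, so treat them as assumptions and confirm against the official documentation.

```python
# Hedged sketch of pulling LLM-ready news context via the AskNews Python SDK.
# Names below are assumptions based on the public SDK; verify against the docs.
from asknews_sdk import AskNewsSDK

sdk = AskNewsSDK(
    client_id="YOUR_CLIENT_ID",          # credentials from your AskNews account
    client_secret="YOUR_CLIENT_SECRET",
    scopes=["news"],
)

# One call returns a token-optimized context of recent news for your prompt.
context = sdk.news.search_news(
    query="OPEC production cuts and oil prices",
    n_articles=10,
    return_type="string",   # prompt-ready text rather than raw JSON
    method="nl",            # natural-language query
)
print(context)              # inspect the response object for the prompt-ready string
```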
Join analysts around the world making their own Smart News Alerts. Imagine: "Alert me if a protest breaks out in Paris," or, as in the video below, "Alert me if news breaks that will affect oil and gas prices." With AskNews' Smart News Alerts, you now have the superpower of monitoring AskNews' premium news feed, Telegram channels, Reddit, and X across hundreds of countries and languages for proactive threat monitoring, automated report generation, intelligent media tracking, and much more. Direct your alerts to your Slack channels, send reports and newsletters to your email inbox, or have draft reports started in your Google Docs. All from the comfort of our state-of-the-art news analytics dashboard, Newsplunker.com. The best part? You can even integrate Smart Alerts with custom webhooks into your own application with our industry-leading API at https://lnkd.in/dCTt9NES (see the webhook sketch below). #alerts #news #threatmonitor #newsplunker #analyst #mediamonitor #threatdetection #newsletter #llm #smartmonitor #smartalert #twitter #reddit #news
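Here is a minimal sketch of what the custom-webhook integration could look like on the receiving end, assuming you point a Smart Alert at your own HTTPS endpoint; the payload fields are hypothetical, so inspect a real delivery for the actual schema.

```python
# Minimal sketch of a webhook receiver for a Smart Alert. The payload fields
# referenced here (e.g. "headline") are hypothetical; check a real delivery
# for the actual schema before relying on them.
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/asknews/alert")
async def receive_alert(request: Request):
    payload = await request.json()
    # Route the alert into your own pipeline: Slack, a ticketing system, a database, etc.
    print("Smart Alert received:", payload.get("headline", payload))
    return {"status": "ok"}

# Run locally with, e.g.:  uvicorn my_webhooks:app --port 8000  (module name is yours)
```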
Backtesting forecasts with LLMs presents unique challenges: it is hard to tease apart what the LLM is predicting versus what it has already seen in its vast training corpus. When it comes to backtesting, most of the bot makers on Metaculus identify the model's knowledge cutoff and then use a news archive with strict time indices to backtest methodologies on the period between the knowledge cutoff and the current day. It works extraordinarily well (a sketch of the idea follows below). Additionally, Metaculus has nailed the benchmark with real-time live comparisons across the top LLMs (and across the top forecasting bot makers). Check it out: https://lnkd.in/dnUCskfP #forecasting #llm #prediction #metaculus
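Here is a small sketch of the time-indexed backtesting idea: the archive query is hard-limited to the window between the knowledge cutoff and the simulated forecast date, so any skill the model shows cannot come from memorized training data. The archive object and run_forecast callable are placeholders, not any particular vendor's API.

```python
# Sketch of knowledge-cutoff-aware backtesting. `archive` and `run_forecast`
# are placeholders: plug in your own time-indexed news archive and LLM call.
from datetime import datetime, timezone

KNOWLEDGE_CUTOFF = datetime(2023, 10, 1, tzinfo=timezone.utc)  # e.g. a model's cutoff

def backtest_question(archive, question, forecast_date, run_forecast):
    """Backtest a single question at a simulated 'as of' date."""
    assert forecast_date > KNOWLEDGE_CUTOFF, "only test on post-cutoff dates"
    # Strict time index: nothing published after forecast_date may leak in.
    context = archive.search(
        query=question,
        published_after=KNOWLEDGE_CUTOFF,
        published_before=forecast_date,
    )
    return run_forecast(question=question, context=context, as_of=forecast_date)
```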
Czech National Bank (ČNB) just released a research article on using LLMs for inflation forecasting... and it's a bit of a mess.
So what's this article about? The authors attempted to use LLMs to predict "year-over-year non-seasonally adjusted CPI inflation" in the Czech Republic. They took historical data from 2019 to 2024 and let the LLM do the forecast:
"""
"Assume you are at time T. Please provide your best forecast for the year-over-year non-seasonally adjusted CPI inflation in the Czech Republic for the next year. Provide only a single number. Do not use any information you did not have at time T."
"""
As for the result, they make a strong claim: the language models ChatGPT and Grok were able to generate inflation forecasts that, in some periods, were more successful than professional analysts and even the Czech National Bank's own model.
What's the issue? They essentially tested the models on the same data they were trained on, comparable to cheating on a test by peeking at the answers beforehand. The "restriction" they tried to enforce in the prompt, instructing the model to only consider past information, is not a hard restriction and cannot actually prevent the model from using that data during prediction. In reality, what they're doing isn't true forecasting, it's information retrieval. Proving that the LLM uses information from the "future" is unfortunately impossible, but I think there are two clues that it's happening and that the whole evaluation setup is flawed.
1. Predicting post-knowledge-cutoff. OpenAI's o1 model has a knowledge cutoff of October 2023, which means any inflation predictions beyond this date are a fair test, since the model has no prior knowledge of that data. Interestingly, ČNB reports results for periods with a similar profile (stable inflation) both before and after this cutoff. What's the difference? After the knowledge cutoff the RMSE jumps from 1.09 to 1.91! Likely because now the model is actually predicting unseen data.
2. GPT-4o-mini beats o1. I replicated the experiment with more models: GPT-4o, GPT-4o-mini, and OpenAI's o1. GPT-4o-mini had the lowest RMSE. No, this doesn't mean OpenAI's smallest model is the best. It simply shows that the task isn't about which model is "better" but rather which one is less prone to recalling data from its training set.
So it seems there is some issue with the evaluation setup, but how can I even know that the models were trained on the CZ CPI data? Well, I literally asked o1 to tell me the exact values for the given dates, and the error was absolutely slim. A sketch of the pre-/post-cutoff check is below.
Final thoughts: I genuinely appreciate that the ČNB is experimenting with LLMs; it's exciting to see a state institution diving into this space. But the way the experiments were conducted suggests some misunderstanding of how LLMs work.
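To illustrate the pre-/post-cutoff check mentioned above, here is a small sketch that scores the same model's forecasts separately on the two periods. The example numbers are made up; the 1.09 vs. 1.91 RMSE figures quoted in the post are the kind of gap such a comparison would surface.

```python
# Sketch of the leakage check: score forecasts before and after the model's
# knowledge cutoff. A suspiciously low pre-cutoff RMSE hints at memorization
# rather than forecasting. All records below are hypothetical.
from datetime import date
from math import sqrt

KNOWLEDGE_CUTOFF = date(2023, 10, 1)   # OpenAI o1, per the post

def rmse(pairs):
    return sqrt(sum((f - a) ** 2 for f, a in pairs) / len(pairs))

# (forecast_date, model_forecast, actual_yoy_cpi) — illustrative values only
records = [
    (date(2022, 6, 1), 17.1, 17.2),
    (date(2023, 3, 1), 15.0, 15.0),
    (date(2024, 2, 1), 4.3, 2.0),
    (date(2024, 8, 1), 3.5, 2.2),
]

pre  = [(f, a) for d, f, a in records if d <= KNOWLEDGE_CUTOFF]
post = [(f, a) for d, f, a in records if d > KNOWLEDGE_CUTOFF]

print(f"RMSE before cutoff: {rmse(pre):.2f}")   # suspiciously low => memorization
print(f"RMSE after cutoff:  {rmse(post):.2f}")  # the honest test
```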