NLP is heading to Chicago for #ASTC2024! Say hello to Tim Lay and Mark Dorgan at Booth 425 in the exhibit hall (9/28-9/29). We hope to see you there!
-
Dr. Xiao-Li Meng, Harvard professor and chair, recently wowed the audience at #ODSC East with his engaging session and sense of humor. Attendees not only learned from his valuable insights but also had the chance to receive a signed book by asking meaningful questions. To top it off, one lucky attendee even received 2 of his autographed books! Thank you, Dr. Meng, for sharing your expertise with us at #ODSC East in Boston. #BOSTON #DATASCIENCE #HARVARD #AI #OPENAI #STATISTICS #ODSCEAST #ODSC2024 #MACHINELEARNING #AIPLUSTRAINING
-
Deep Learning With DAGs: The authors introduce a novel approach to causal inference that leverages deep #neuralnetworks to empirically evaluate theories represented as DAGs. Read: https://spkl.io/604847AfI Subscribe: https://spkl.io/604947AfL Stone Center for Research on Wealth Inequality & Mobility #ArtificialIntelligence
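To make the idea concrete, here is a minimal, hypothetical sketch (in PyTorch; not the authors' actual method, DAG, or data) of parameterizing the structural equations of a toy DAG X -> M -> Y with small neural networks, fitting them to observations, and then pushing an intervention through the fitted model:

```python
import torch
import torch.nn as nn

# Each node in the assumed DAG X -> M -> Y gets a small MLP as its structural equation.
class StructuralEq(nn.Module):
    def __init__(self, n_parents):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_parents, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, parents):
        return self.net(parents)

# Simulated observational data consistent with the toy DAG (illustration only).
n = 1000
x = torch.randn(n, 1)
m = 2.0 * x + 0.1 * torch.randn(n, 1)
y = torch.sin(m) + 0.1 * torch.randn(n, 1)

f_m = StructuralEq(1)   # M := f_m(X) + noise
f_y = StructuralEq(1)   # Y := f_y(M) + noise
opt = torch.optim.Adam(list(f_m.parameters()) + list(f_y.parameters()), lr=1e-2)

for _ in range(500):
    opt.zero_grad()
    loss = ((f_m(x) - m) ** 2).mean() + ((f_y(m) - y) ** 2).mean()
    loss.backward()
    opt.step()

# Estimate an interventional quantity, e.g. E[Y | do(X = 1)], by pushing the
# intervention through the fitted structural equations in topological order.
with torch.no_grad():
    m_do = f_m(torch.ones(n, 1))
    y_do = f_y(m_do)
    print("Estimated E[Y | do(X=1)]:", y_do.mean().item())
```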
-
Learning about foresight with other practitioners is not only helpful, it's intellectually challenging and fun. Some of my favorite parts of our upcoming foresight classes:

Three Horizons of AI: it's like a tiny ethnographic study of what people from all over the world and in super diverse professional contexts are doing with genAI. We get to share questions, ideas, fears, and insights about the AI-enabled world that we see unfolding around us. Meet us for two hours, 3 Wednesdays in a row, starting Sept. 4: https://lnkd.in/gT8cSDwD

Scenario Building: No matter how many times we deliver this class (which builds on Jim Dator's classic Alt Futures framework), it never fails to teach me something new about this profound, complex, and rather tricky scenario methodology. IFTF's Kathi Vian created a step-by-step toolkit that peels back the creative dilemmas of scenario development. Dive in with us for five 3-hour sessions over 2.5 weeks in October: https://lnkd.in/gH_hwP2v
-
Excited to share my latest achievement! Completing the "Generative AI with Large Language Models" course has been truly enriching. From grasping the basics of the transformer architecture behind generative AI and prompt engineering techniques, to delving into advanced topics like model fine-tuning methods, chain-of-thought (CoT) prompting, reinforcement learning from human feedback (RLHF), and model evaluation, each step has been captivating. Indeed, "Attention Is All You Need"!
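As a quick illustration of one of those topics, here is a small, hypothetical example of chain-of-thought prompting (the question and the send_to_llm placeholder are my own, not from the course):

```python
# Hypothetical illustration of chain-of-thought (CoT) prompting: the same
# question asked directly vs. with an explicit request for step-by-step reasoning.
question = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"

direct_prompt = f"{question}\nAnswer with just the number."

cot_prompt = (
    f"{question}\n"
    "Let's think step by step: convert the time to hours, divide distance by "
    "time, and only then state the final answer."
)

# send_to_llm() is a placeholder for whichever client library you use, e.g.:
# response = send_to_llm(cot_prompt)
print(direct_prompt)
print("---")
print(cot_prompt)
```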
-
I just finished reading the brilliant "Taming Silicon Valley: How We Can Ensure That AI Works for Us" by Gary Marcus.

Marcus, an Emeritus Professor of Psychology and Neural Science at NYU, founder of Geometric.AI (later sold to Uber), and founder of Robust.AI, brings his expertise to this examination of the technology's impact on society. As a long-time, outspoken critic of AI hype and of the notion that machine learning alone can solve general intelligence and secure a positive future for humanity, Marcus offers a refreshing perspective in this three-part book.

The work begins by outlining the current applications of AI and their associated downsides, giving readers a clear understanding of the present landscape. Marcus then delves into the business and political frameworks that have created these issues and that promise to maintain and worsen them. The most compelling section is the final part, where Marcus empowers readers by exploring in depth what individuals can do to create the future we want. This concrete, action-oriented approach sets the book apart from mere critiques of the industry. I'm here for it!
-
As I ramp up my content creation, I'm faced with an abundance of ideas and topics that have piled up in my notebooks. From diving into the practical applications of the COM-B model and Behaviour Change Wheel to analyzing the intersection of AI and behavioral science, there's so much I want to write about! So, I'd love your help in prioritising my content ideas - which topics would you find most valuable for me to cover?

To make things easier, I've created a short form with various topics to choose from, including:
- Discover new theories or models in BeSci
- Practical applications of the COM-B model and Behaviour Change Wheel
- Systems thinking in BeSci
- Critical analyses and myth-busting articles in BeSci
- Intersection of BeSci and artificial intelligence (AI)
- Case studies and real-world examples from my professional experience
- Exploring the influence of culture on BeSci research and applications
- Book recommendations related to BeSci and its applications
- Summaries and key takeaways from recent BeSci research papers
- Curated reading and listening recommendations
- ...and anything else that's on your mind!

You can find it here - it's only 3 questions, and will take you less than 30 seconds! https://lnkd.in/e96geEsW #behavioralscience #behaviorchange
-
Excited to share our latest research on "Securing Social Spaces: Harnessing Deep Learning to Eradicate Cyberbullying." Our paper sheds light on the seriousness of cyberbullying and the need for more accurate tools to detect and address it. We introduce a deep learning-based approach, achieving an 89.16% accuracy rate in predicting cyberbullying instances. This research is a significant step toward a safer digital landscape. Read the full paper here: https://bit.ly/43NLAbU. #SocialMedia #Cyberbullying #DeepLearning #ResearchPublication
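For readers curious what a deep-learning detector in this space can look like in code, here is a minimal, hypothetical PyTorch sketch (a toy embedding + mean-pooling classifier trained on fake data; it is not the paper's architecture, dataset, or the model behind the reported 89.16% accuracy):

```python
import torch
import torch.nn as nn

# Toy binary text classifier: embed token ids, average-pool, classify benign vs. cyberbullying.
class ToyCyberbullyClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, 2)  # two classes: benign / cyberbullying

    def forward(self, token_ids):
        pooled = self.embed(token_ids).mean(dim=1)  # (batch, embed_dim)
        return self.fc(pooled)                      # (batch, 2) logits

vocab_size = 1000
model = ToyCyberbullyClassifier(vocab_size)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake batch: 8 messages of 20 token ids each, with made-up labels.
tokens = torch.randint(0, vocab_size, (8, 20))
labels = torch.randint(0, 2, (8,))

opt.zero_grad()
loss = loss_fn(model(tokens), labels)
loss.backward()
opt.step()
print("toy training-step loss:", loss.item())
```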
-
I hear a lot about #AGI and how #LLMs may lead us to it. These discussions now seem to be in the public eye, but they have been of private interest to niche groups for a long time. A book that I really enjoyed reading, almost 20 years ago, on this topic is "The Mind's I". It is a collection of essays by many different authors, collected and commented on by Douglas Hofstadter and Daniel Dennett.

I think there are a lot of interesting developments in AI (especially around LLMs and also the production of novel mathematics). However, I think we are very far from a discussion about "conscious" machines or "intelligent" machines. Creativity, the kind which can't be taught at a university, is still firmly in the domain of human beings (as developments in AI itself prove). Let's keep an open mind and see what comes.

In the meantime, for anyone interested in these topics, I highly recommend getting acquainted with the discussions that have already occurred in the circles of people who are curious about consciousness. They have thought about machines for a long time!
-
Future of Coding Schrödinger's Wiki: a wiki that notionally contains every article; each page "collapses" into existence on the first observation/visit. The generation process uses Slack messages, podcast transcripts, and community-adjacent papers to produce articles grounded in FoC's ideas. It also has hypermedia features for navigating and exploring the references within the same application. #llm #ai #rag #wiki
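A minimal, hypothetical sketch of the "collapse on first visit" idea (the corpus, the keyword retriever, and generate_article are stand-ins; in the real project the generation step would be an LLM call over the actual Slack/podcast/paper sources):

```python
from functools import lru_cache

# Stand-in corpus; the real system draws on Slack messages, transcripts, and papers.
CORPUS = [
    "Slack message: hypermedia should let readers follow references in place.",
    "Podcast transcript: end-user programming blurs reading and writing.",
    "Paper excerpt: wikis work best when pages link densely to each other.",
]

def retrieve(topic, k=2):
    # Toy retriever: rank snippets by word overlap with the requested topic.
    topic_words = set(topic.lower().split())
    ranked = sorted(CORPUS, key=lambda s: -len(topic_words & set(s.lower().split())))
    return ranked[:k]

def generate_article(topic, snippets):
    # Placeholder for an LLM call that writes the article grounded in the snippets.
    sources = "\n".join(f"- {s}" for s in snippets)
    return f"# {topic}\n\nGenerated on first visit, grounded in:\n{sources}"

@lru_cache(maxsize=None)  # the page "exists" only after its first observation
def get_page(topic):
    return generate_article(topic, retrieve(topic))

print(get_page("hypermedia references"))
print(get_page("hypermedia references") is get_page("hypermedia references"))  # cached after first visit
```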
-
I hosted the biggest event night of my life in the heart of Miami ... When I was attending other events, I never thought that some day I would be the one to host. We hosted The GenAI Collective Miami's second event in collaboration with GPTuesdays at the Miami Dade College Hi Tech AI Center. We were thrilled to have over 200 attendees join us for an exciting event featuring three exceptional speakers on AI.

Event TL;DR:
1. Aleksey Romanov: NLP: How Did We Get Here? Explored the evolution of NLP from basic models like TF-IDF to advanced architectures like Transformers. Key takeaway: understanding historical context helps us understand and overcome shortcomings in the current state of the art.
2. Alexander Comerford: Benchmarks: What Do They Tell Us? Discussed the importance and limitations of benchmarking LLMs, highlighting that benchmarks can be marketing tools rather than true performance indicators. Key takeaway: while benchmarks can help us understand what different models do well, it's important to understand how we can be fooled when they're used as marketing tools.
3. Kye G.: Automated Prompt Engineering. Introduced methods for optimizing prompts to improve AI performance, emphasizing that automating this process saves time and enhances efficiency (a toy sketch of the idea follows below). Key takeaway: in the same way LLMs reliably structure human-readable content, they can also structure LLM prompts reliably and far more quickly than humans.

Kudos to Grant Kurz of GPTuesdays for not just selflessly putting on this event but for all the amazing events he has put together. And to my buddies Larissa Macko and Charles Whiteman for wonderfully coordinating the event. See you at the next one.
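A minimal, hypothetical sketch of what automated prompt optimization can look like (this is not the speaker's actual method; llm() is a stub standing in for whichever client you use, and the examples are made up):

```python
# Hypothetical sketch of automated prompt refinement: score a candidate prompt
# on a few labeled examples, ask the model to rewrite it, and keep the best version.
def llm(prompt: str) -> str:
    # Stub standing in for a real LLM client call; replace with your provider of choice.
    return "stub response"

def score(prompt_template: str, examples) -> float:
    # Fraction of examples the prompted model answers exactly right (toy metric).
    hits = sum(llm(prompt_template.format(q=q)).strip() == a for q, a in examples)
    return hits / len(examples)

def refine(prompt_template: str, examples, rounds: int = 3) -> str:
    best, best_score = prompt_template, score(prompt_template, examples)
    for _ in range(rounds):
        candidate = llm(
            "Rewrite the following prompt so a language model answers more "
            "accurately. Keep the {q} placeholder.\n\nPROMPT:\n" + best
        )
        candidate_score = score(candidate, examples)
        if candidate_score > best_score:
            best, best_score = candidate, candidate_score
    return best

examples = [("2 + 2 = ?", "4"), ("Capital of France?", "Paris")]
print(refine("Answer concisely: {q}", examples))
```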