February 20, 2024
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
To make generative AI’s potential a reality for a physical business, two crucial elements come into play: people and data. Investing in a highly skilled team is a precondition for success in any business. Also critical is a diversity of expertise, experiences, cultural touch points, and backgrounds. Drawing on this expertise and experience to inform how generative AI is developed allows more context to be built in, and the models can be expanded to serve a global audience rather than a regional or national one. Data quality is crucial in both edge computing and generative AI models. This is what has driven Motive to invest in a truly world-class annotations team. Because accuracy is so critical to our customers’ safety and optimization, this team ensures that the processes behind our use of generative AI are strong and consistent. These processes include ensuring the highest-quality data and labels to train our models, and thus our products and services. At the same time, generative AI in the physical economy will only be as useful as the insights and capabilities it creates.
There is plenty of anecdotal evidence in the industry of GCs taking on data center projects in EU regions without fully understanding the local resourcing requirements and supply chain logistics. They have also incorrectly assumed that a UK labor force will be as effective as normal when on rotation-based attendance in a regional project office. Instead, the solution may lie in developing smaller, fully supported, highly competent, highly motivated, and well-compensated teams capable of delivering increased output to realize your competitive potential, a theme also adopted by World Quality Week in 2023. To meet the industry's strong imperative for quick time-to-market in the context of an acute skills shortage, we argue that the solution lies in training people and empowering them with the capabilities of AI. Streamlined, lean teams equipped with mature AI tools have a better chance of efficiently delivering larger projects. Investment in training is crucial across the industry, particularly innovative approaches that enable smaller teams to achieve more through AI assistance and other technological advancements.
To be truly meaningful in addressing the pain associated with data and AI pipelines, data observability tools must expand into FinOps. It's no longer enough to know where a pipeline stalls or breaks; data teams also need to know how much the pipelines cost. In the cloud, inefficient performance drives up computing costs, which in turn drives up total costs. Tools must encompass FinOps to provide observability into both infrastructure and computing costs, broken down by job, user, and project. They must also include advanced analytics that guide teams on making individual pipelines cost-efficient. This will free data teams to focus on strategic decision-making rather than spending their time reconfiguring pipelines for cost. ... To meet these demands, data observability vendors must offer products that let customers see, at a platform-specific level, detailed cost visibility, efficient management of storage costs, chargeback/showback, and where the expensive projects, queries, and users lie.
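The cost breakdown by job, user, and project described above can be sketched as a simple aggregation. The record fields, job names, and the spend threshold below are all illustrative assumptions, not details from the article:

```python
from collections import defaultdict

# Hypothetical pipeline-run records; field names and values are illustrative.
runs = [
    {"job": "daily_etl",  "user": "ana", "project": "sales",    "compute_cost": 42.50},
    {"job": "daily_etl",  "user": "ana", "project": "sales",    "compute_cost": 47.10},
    {"job": "ml_train",   "user": "raj", "project": "forecast", "compute_cost": 310.00},
    {"job": "report_gen", "user": "lee", "project": "sales",    "compute_cost": 5.25},
]

def cost_breakdown(runs, dimension):
    """Total compute cost grouped by one dimension: job, user, or project."""
    totals = defaultdict(float)
    for run in runs:
        totals[run[dimension]] += run["compute_cost"]
    return dict(totals)

by_job = cost_breakdown(runs, "job")

# Flag jobs above an arbitrary spend threshold for chargeback/showback review.
expensive = {job: cost for job, cost in by_job.items() if cost > 100}
```

A real observability tool would pull these records from cloud billing and job metadata APIs, but the grouping logic is the same.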
Effective testing is not just about covering every line of code; it's about understanding the underlying relationships. How do we effectively test the complex relationships in our software? Understanding functions and relations proves an invaluable asset in this endeavor. ... It's worth noting that while all programs can be viewed as functions in a broad sense, not all are "pure" functions. Pure functions have no side effects, meaning they rely solely on their inputs to produce outputs without altering any external state. In contrast, many practical programs involve side effects, which complicates interpreting them as pure functions. ... While functions provide clear input-output connections, not all relationships in software are so straightforward. Imagine tracking dependencies between tasks in a project management tool. Here, multiple tasks might relate to each other, forming a more complex network. ... Relations can sometimes group elements into equivalence classes, where elements within a class behave similarly. Testers can leverage this by testing one element in each class, assuming similar behavior for the others, saving time and resources.
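The idea of testing one representative per equivalence class can be sketched with a small pure function. The function `classify_age` and its class boundaries are a hypothetical example, not drawn from the article:

```python
def classify_age(age: int) -> str:
    """Pure function: the output depends only on the input, with no side effects."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 18:
        return "minor"
    if age < 65:
        return "adult"
    return "senior"

# One representative value per equivalence class; the assumption is that
# any other member of the same class would behave the same way.
representatives = {"minor": 10, "adult": 40, "senior": 70}

for expected, age in representatives.items():
    assert classify_age(age) == expected
```

In practice testers would also probe the boundaries between classes (17/18, 64/65), since off-by-one errors are exactly where the "similar behavior" assumption breaks down.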
Mozilla said it could find only one chatbot that met its minimum security standards, with a worrying lack of transparency over how the intensely personal information shared in such apps is protected. Almost two-thirds of the apps didn't reveal whether the data they collect is encrypted. Just under half permitted the use of weak passwords, with some even accepting a password as flimsy as "1". More than half of the apps tested also failed to let users delete their personal data. One even claimed that "communication via the chatbot belongs to the software." Mozilla also found that the use of trackers (tiny pieces of code that gather information about your device and what you do on it) was widespread among the romantic chatbots. ... The main tip is not to say anything to the chatbot that you wouldn't want friends or colleagues to discover, as the privacy of these services cannot be guaranteed. Also use a strong password, request that personal data be deleted once you've finished using the chatbot, opt out of having your data used to train AI models, and don't accept phone permissions that give the chatbot access to your location, camera, microphone, or files on your device.
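A service that accepts "1" as a password is enforcing no policy at all. A minimal strength check, assuming a common (hypothetical) policy of length plus character variety that the article itself does not specify, might look like:

```python
import re

def is_strong_password(pw: str) -> bool:
    """Illustrative policy: 12+ chars with lower, upper, digit, and symbol."""
    return (
        len(pw) >= 12
        and re.search(r"[a-z]", pw) is not None
        and re.search(r"[A-Z]", pw) is not None
        and re.search(r"\d", pw) is not None
        and re.search(r"[^A-Za-z0-9]", pw) is not None
    )

assert not is_strong_password("1")   # the flimsy password some apps accepted
assert is_strong_password("Tr&ust-n0-ch@tbot")
```

Even a check this simple would have rejected the weakest passwords Mozilla's reviewers tried.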
A beautiful symphony requires more than just individual talent. Ethical considerations like potential biases and misinformation risks demand attention. We must ensure responsible development, so that these LLMs don't become instruments of discord but rather powerful tools for good. The potential for collaboration is even more exciting. Imagine Bard fact-checking Claude's poems, or Qwen providing real-time data for GPT-3.5-Turbo-0613's code generation. Such collaborations could lead to groundbreaking innovations, a true ensemble performance exceeding the capabilities of any single LLM. This is just the opening act of a much grander performance. As the music evolves, LLMs hold immense potential. Advancements in natural language understanding could enable nuanced conversations, personalized education could become a reality, and creative collaboration could reach unprecedented heights. This orchestra is just beginning its performance, and the future holds a symphony of possibilities waiting to be composed. In short, the key lies in understanding their technical nuances, recognizing their individual strengths, and fostering responsible development.