August 2022
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
Enterprises have responded to growing storage demands by moving to larger, scale-out NAS systems. The on-premise market here is well served, with suppliers such as Dell EMC, NetApp, Hitachi, HPE and IBM all offering large-capacity NAS technology with different combinations of cost and performance. Generally, applications that require low latency – media streaming or, more recently, training AI systems – are well served by flash-based NAS hardware from the traditional suppliers. But for very large datasets, and to ease data movement between on-premise and cloud systems, suppliers are now offering local versions of object storage. The large cloud hyperscalers even offer on-premise, object-based technology so that firms can take advantage of object storage’s global namespace and data protection features, with the security and performance benefits of local storage. However, as SNIA warns, these systems typically lack interoperability between suppliers. The main benefits of on-premise storage for unstructured data are performance, security, compliance and control – firms know their storage architecture, and can manage it in a granular way.
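Many of these local object stores present an S3-compatible API, which is what makes moving data between on-premise and cloud systems relatively painless. Below is a minimal sketch of that idea; the endpoint URL, credentials and bucket names are hypothetical placeholders, not details from the article, and it assumes the on-prem product exposes an S3-compatible interface:

```python
# Minimal sketch: the same S3-style client code can target an on-premise
# object store or a public cloud bucket just by changing the endpoint.
# The endpoint URL, keys and bucket names below are hypothetical.
import boto3

local = boto3.client(
    "s3",
    endpoint_url="https://objectstore.local:9000",  # on-prem S3-compatible endpoint
    aws_access_key_id="LOCAL_KEY",
    aws_secret_access_key="LOCAL_SECRET",
)

# Upload a large unstructured file to the local object store
local.upload_file("scan-0001.tiff", "unstructured-archive", "scans/scan-0001.tiff")

# The equivalent call against the public cloud uses the provider's default endpoint
cloud = boto3.client("s3")
# cloud.upload_file("scan-0001.tiff", "cloud-archive", "scans/scan-0001.tiff")
```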
Eventually, CXL is expected to be an all-encompassing cache-coherent interface for connecting any number of CPUs, memory, processing accelerators (notably FPGAs and GPUs), and other peripherals. The CXL 3.0 spec, announced last week at the Flash Memory Summit (FMS), takes that disaggregation even further by allowing other parts of the architecture—processors, storage, networking, and other accelerators—to be pooled and addressed dynamically by multiple hosts and accelerators, just like memory in the 2.0 spec. The 3.0 spec also provides for direct peer-to-peer communications over a switch or even across a switch fabric, so two GPUs could theoretically talk to one another without using the network or involving the host CPU and memory. Kurt Lender, co-chair of the CXL marketing work group and a senior ecosystem manager at Intel, said, “It’s going to be basically everywhere. It’s not just IT guys who are embracing it. Everyone’s embracing it. So this is going to become a standard feature in every new server in the next few years.” So how will the applications running in enterprise data centers benefit?
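For software, pooled CXL memory typically surfaces as ordinary system memory. As a rough illustration only – assuming a Linux host that exposes CXL-attached memory as a CPU-less NUMA node, which is how current kernels generally present it – an administrator could spot such memory-only nodes by reading sysfs:

```python
# Minimal sketch: list NUMA nodes and flag CPU-less ("memory-only") nodes,
# which is how CXL-attached memory typically appears on a Linux host.
# Assumes a Linux system with /sys mounted; purely illustrative, not part
# of the CXL specification itself.
from pathlib import Path

for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpulist = (node / "cpulist").read_text().strip()
    memtotal = (node / "meminfo").read_text().splitlines()[0].strip()
    kind = "memory-only (possibly CXL-attached)" if not cpulist else f"CPUs {cpulist}"
    print(f"{node.name}: {kind} | {memtotal}")
```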
Whatever your organization’s preference for team building, it should be carefully selected from a range of options, and it should be clear to everyone why the firm chose one particular structure over another and what’s expected of everyone participating. Start with desired outcomes and cultural norms, then articulate principles to empower action, and, finally, provide the skills and tools needed for success. ... Even in the most forward-thinking organizations, people want to know what a meeting is supposed to achieve, what their role is in that meeting, and whether gathering people around a table or their screens is the most effective and efficient way to get to the desired outcome. Is there a decision to be made? Or is the purpose information sharing? Have people been given the chance to opt out if the above points are not clear? Asking these questions can serve as a rapid diagnostic for what you are getting right—and wrong—in your meetings. Poorly run meetings sap energy and breed mediocrity.
That’s not to say that meetings aren’t important, but it makes sense for managers to find the right balance for their teams, said Dan Kador, vice president of engineering at Clockwise. “It's something that companies have to pay attention to and try to understand their meeting culture — what's working and what's not working for them.” “It is important that teams get together to discuss things and make sure they are all on the same page, but often meetings are scheduled at regular intervals even if they aren’t necessary,” said Jack Gold, principal analyst and founder at J. Gold Associates. “We are all subjected to weekly meetings, or meetings at other intervals, where, even if there is nothing to discuss, the meeting takes place anyway. And some meeting organizers feel obligated to use up the entire scheduled time.” Of course, meeting overload is not just an issue for those writing code. “Too much time spent in meetings is not just a problem for developers,” said Gold. “It is a problem across the board for employees in many companies.”
To counter the threat of e-commerce skimming, the card companies are again using the two tools they have in their arsenal: making stolen data worthless and creating new technical security standards. To make stolen payment card data worthless, there’s a chip-equivalent technology for e-commerce called 3-D Secure v2, which has already been rolled out in the EU. This technology requires something more than just knowledge of the numbers printed on a payment card to make an online transaction. After entering their payment card data, the consumer may have to further confirm a purchase using a bank’s smartphone app or by entering a code received by SMS. Alongside this re-engineering of the payment system, the latest version of the Payment Card Industry Data Security Standard (PCI DSS) includes new technical requirements to prevent and detect e-commerce skimming attacks. PCI DSS applies to all entities involved in the payment ecosystem, including retailers, payment processors and financial institutions. Firstly, website operators will need to maintain an inventory of all the scripts included in their website and determine why each script is necessary.
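Keeping that inventory current requires a repeatable way to enumerate what actually loads on a payment page. The sketch below shows one possible approach, assuming a hypothetical checkout URL; it only captures statically declared script tags (not scripts injected at runtime), and it records a hash for each so unexpected changes stand out:

```python
# Minimal sketch: build an inventory of <script src=...> tags on a payment page.
# "https://shop.example/checkout" is a hypothetical URL; scripts injected
# dynamically at runtime will not be captured by this static pass.
import hashlib
from html.parser import HTMLParser
from urllib.request import urlopen


class ScriptCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.scripts = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            attrs = dict(attrs)
            if "src" in attrs:
                self.scripts.append(attrs["src"])


page = urlopen("https://shop.example/checkout").read().decode("utf-8")
collector = ScriptCollector()
collector.feed(page)

# Record each external script with a SHA-256 hash so changes are detectable,
# alongside a justification field the operator must fill in.
for src in collector.scripts:
    body = urlopen(src).read() if src.startswith("http") else b""
    digest = hashlib.sha256(body).hexdigest() if body else "n/a (relative path)"
    print(f"{src}\n    sha256: {digest}\n    justification: <why is this script needed?>")
```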
The great thing about cloud is you use it when you need it. Obviously, you pay for using it when you need it, but oftentimes data science applications, especially ones you’re running over large datasets, aren’t running continuously or don’t need to be structured in a way that they run continuously. Therefore, you’re talking about a very concentrated amount of spend for a very short amount of time. Buying hardware to do that means your hardware sits idle unless you are very active about making sure you’re using that resource efficiently over time. One of the biggest advantages of cloud is that it runs and scales as you need it to. So even a tiny team can run a massive computation and run it when they need to and not consistently. That adds challenges, of course. “I fired this thing off on Friday, I come back in on Monday and it’s still running, and I accidentally spent $6,000 this weekend. Oops.” That happens all the time, and so much of that is figuring out how to establish guardrails. Sometimes data science gets treated like, “You know, they’re going to do whatever they need to.”
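One common guardrail is a hard budget with alert thresholds, so a forgotten weekend job gets flagged before it quietly burns through thousands of dollars. Here is a minimal sketch using the AWS Budgets API via boto3; the account ID, budget name, dollar limit and email address are hypothetical placeholders, and other clouds offer equivalent budget and alerting services:

```python
# Minimal sketch: a monthly cost guardrail with an 80% alert, using AWS Budgets.
# Account ID, budget name, limit and email address are hypothetical placeholders.
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "data-science-sandbox",
        "BudgetLimit": {"Amount": "2000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,  # alert at 80% of the monthly limit
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "ds-team@example.com"}
            ],
        }
    ],
)
```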