Key Points:
- Service accounts are a necessity for automation in every application architecture and data center. After proving their identity, applications and scripts can access (cloud) resources. Service account keys are about as secure as, or slightly more secure than, username/password combinations.
- However, in recent years we have all learned that securing user accounts with only one factor is inadequate. Thus, multi-factor authentication (MFA), using authenticator apps or (more legacy) text messages, became state of the art. The second factor prevents hackers who have stolen a password from accessing the account.
- The challenge for service accounts is that MFA does not work for them, and network-level protection (IP filtering, VPN tunneling, etc.) is not consistently applied, primarily due to complexity and cost.
- The major cloud providers such as Amazon Web Services (AWS), Azure, Google Cloud, and Alibaba Cloud collaborate with GitHub on security. Beyond the big public clouds, GitHub also scans for credentials related, e.g., to OpenAI, Slack, Tableau, or Dynatrace (see the sketch after this list for the general idea behind such scanning).
- The classic way to handle credential leaks is for GitHub to inform the customer (and service provider) about the leak. Then, the customer's operations and engineering teams fix it.
- Now, Google has changed the game with a recent policy change: if an access key appears in a public GitHub repository, GCP deactivates the key, even if applications crash as a result. Google's announcement marks a shift in the risk and priority tango. Gone are the days when patching such leaks could take days or weeks.
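As a rough illustration of how such credential scanning works, here is a minimal sketch, assuming a local repository checkout and just two well-known token shapes (AWS access key IDs begin with AKIA, Google API keys with AIza). It is not GitHub's scanner: the real service matches far more provider-specific patterns, inspects commit history, and works with issuing partners to validate and revoke hits.

```python
import os
import re

# Illustrative patterns only; real scanners use many provider-specific rules
# and verify candidate matches with the issuing service before alerting.
PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Google API key": re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b"),
}

def scan_checkout(root: str):
    """Yield (path, line_no, rule_name) for each suspicious match under root."""
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d != ".git"]  # skip the object store
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    for line_no, line in enumerate(fh, start=1):
                        for rule_name, pattern in PATTERNS.items():
                            if pattern.search(line):
                                yield path, line_no, rule_name
            except OSError:
                continue  # unreadable file; move on

if __name__ == "__main__":
    for path, line_no, rule_name in scan_checkout("."):
        print(f"{path}:{line_no}: possible {rule_name}")
```

This shape-matching logic is also why providers can react automatically, as Google now does: a leaked key has a recognizable format, so detection, and increasingly revocation, can happen in minutes rather than days.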
You already know that every day, Data Center Knowledge brings advice, trends, and strategies for data center professionals on how to design, build, and manage world-class data centers.
That means original reporting from our team of journalists and unique commentary you won’t see anywhere else! But in case you missed them, here are some of our other must-read favorites from this week:
Navigating the Maze of Data Center Outage Trends
Key Points:
- Data center outages are on the decline, and investment in on-site backup systems is the main reason. That's the one-line takeaway from the latest Uptime Institute study of data center outages.
- Fifty-five percent of organizations reported having experienced a data center outage in the past three years. However, only 27% of organizations that experienced an outage identified it as "significant," "serious," or "severe." Human errors contributed to about half of notable data center outages, with staff failure to follow procedures topping the list of human-error types associated with this trend.
- The main reason why data center outages are declining in frequency, according to the Uptime research, is that companies have invested in redundancy systems for their facilities.
- On the whole, the report identifies winning strategies for increasing data center availability and reducing the risk of outages as of 2024. Click the story above for the full list of tips.
The Data Center Diet Plan
Key Points:
- Composed of representatives from Amazon Web Services (AWS), Google, Meta, Microsoft, Digital Realty, and Schneider Electric, the iMasons Climate Accord is working towards industry-wide adoption of an open standard to report carbon in data center power, materials, and equipment.
- On Tuesday (July 16), the coalition’s governing body published an open letter calling on all data center suppliers to support greater transparency in Scope 3 emissions via the adoption of Environmental Product Declarations (EPDs) as part of broader efforts to reduce the industry’s carbon footprint.
- Scope 3 emissions are more difficult to trace as they include indirect emissions from sources such as a data center’s supply chain and customer base, waste management, and business travel (a toy breakdown follows this list). In the face of soaring demand for digital infrastructure and escalating power constraints, sustainability remains one of the data center industry’s most pressing issues.
- “In the past year, we have significantly increased our engagement with major suppliers on material sustainability issues, including the availability of EPDs for products that don’t already have them,” Aaron Binkley, vice president of sustainability for Digital Realty, told Data Center Knowledge.
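To make the scope split concrete, here is a toy tally for a hypothetical facility. The category names follow the GHG Protocol scopes described above, but the `EMISSIONS` structure and every figure in it are invented placeholders, not data from the coalition or Digital Realty.

```python
# Hypothetical annual inventory in tonnes of CO2-equivalent (tCO2e).
# Scope 1: direct emissions (e.g., diesel backup generators).
# Scope 2: purchased electricity.
# Scope 3: indirect emissions -- supply chain, materials, travel, waste.
EMISSIONS = {
    "scope1": {"backup_generators": 1_200.0},
    "scope2": {"purchased_electricity": 48_000.0},
    "scope3": {
        "supply_chain_hardware": 35_000.0,   # where supplier EPDs would feed in
        "construction_materials": 22_000.0,
        "business_travel": 900.0,
        "waste_management": 400.0,
    },
}

total = sum(sum(categories.values()) for categories in EMISSIONS.values())
for scope, categories in EMISSIONS.items():
    subtotal = sum(categories.values())
    print(f"{scope}: {subtotal:>10,.0f} tCO2e ({subtotal / total:.0%} of total)")
```

Even in this made-up breakdown, the Scope 3 lines dominate, which is why the accord is pressing suppliers for EPDs: without standardized per-product declarations, the largest entries are the hardest to fill in with real numbers.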
Expanding Chip Research and Development
Key Points:
- The CHIPS for America National Advanced Packaging Manufacturing Program will encourage semiconductor manufacturers to explore advanced packaging, or solutions that utilize multiple chips and processes on a single unit.
- In a notice of intent, the U.S. Department of Commerce announced plans to open the research and development project, stating that the program would “establish and accelerate domestic capacity for semiconductor advanced packaging.”
- “This announcement is just the most recent example of our commitment to investing in cutting-edge R&D that is critical to creating quality jobs in the U.S. and making our country a leader in advanced semiconductor manufacturing,” said Commerce Secretary Gina Raimondo.
- Program funding was made available under the CHIPS and Science Act. The program will explore five areas related to advanced packaging across three state-of-the-art semiconductor research facilities set to open in 2025, 2026, and 2028.
Major Moves Inside the Industry
The Data Center Knowledge News Roundup brings you the latest news and developments across the data center industry – from investments and mergers to security threats and industry trends.
Key Points:
- In a rapidly developing story, security firm CrowdStrike warned customers on Friday morning that its Falcon Sensor threat-monitoring product was causing Microsoft’s Windows operating system to crash. It was unclear what triggered the issues, which coincided with disruptions of Microsoft’s Azure cloud and 365 office software services.
- The cascading failure resulted in computer systems failing around the world. Outages were reported at airlines, banks, and the London Stock Exchange.
- Elea Digital Data Centers has announced a 120 MW, $1 billion strategic expansion plan to meet Brazil’s “booming demand” for data centers. The colocation provider’s latest investment plan includes the acquisition of two data center campuses in greater São Paulo and a large-scale footprint expansion of up to 100 MW in the coming years.
- In the US, meanwhile, TA Realty and EdgeConneX have unveiled plans to jointly develop a 324 MW data center campus in Atlanta, Georgia, and Crusoe Energy Systems is building a 200 MW facility at the Lancium Clean Campus near Abilene, Texas.
- Through FASST (Frontiers in Artificial Intelligence for Science, Security, and Technology), the U.S. Department of Energy (DOE) and its 17 national laboratories aim to build the “world’s most powerful integrated scientific AI systems” for science, energy, and national security, in collaboration with academic and industry partners.
Latest Major Tech Layoff Announcements
Original Story by Jessica C. Davis, Updated by Brandon Taylor
Chip Watch: Commentary of the Week
Key Points:
- Riverlane, a company specializing in quantum error correction technology, has released a three-year roadmap toward quantum computers being able to run one million reliable quantum operations. Ultimately, quantum computers will need the ability to perform trillions of error-free operations.
- At the heart of Riverlane’s MegaQuOp quest is its quantum error correction technology, Deltaflow, which the company said can scale as the total number of qubits in a quantum computer grows, whatever the qubit type (a toy error-correction sketch follows this list).
- “Our three-year roadmap promises a series of major milestones on the journey to fault tolerance, culminating in enabling one million error-free quantum operations by the end of 2026,” said Riverlane vice president of product and partnerships Maria Maragkou.
- Several quantum computing operators have announced error correction targets and breakthroughs, including Google, IBM, and Microsoft, but according to Riverlane, their error correction methodologies need to scale as fast as the number of qubits grows.
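Deltaflow itself is proprietary, but the core idea of error correction can be shown with the simplest toy example: a three-bit repetition code that fixes any single bit flip by majority vote. The sketch below simulates it classically, with plain bits and a made-up physical error rate; real quantum codes must also handle phase errors and measure syndromes without disturbing the data, which is where the scaling challenge described above comes from.

```python
import random

def encode(bit: int) -> list[int]:
    """Encode one logical bit as three physical copies (3-bit repetition code)."""
    return [bit, bit, bit]

def apply_noise(codeword: list[int], p: float) -> list[int]:
    """Flip each physical bit independently with probability p."""
    return [b ^ (random.random() < p) for b in codeword]

def decode(codeword: list[int]) -> int:
    """Majority vote corrects any single bit flip."""
    return int(sum(codeword) >= 2)

def logical_error_rate(p: float, trials: int = 100_000) -> float:
    errors = 0
    for _ in range(trials):
        bit = random.randint(0, 1)
        if decode(apply_noise(encode(bit), p)) != bit:
            errors += 1
    return errors / trials

if __name__ == "__main__":
    # With p = 0.01, two or more of the three bits must flip for the logical
    # bit to be wrong, so the logical rate is roughly 3 * p**2, i.e. ~3e-4.
    p = 0.01
    print(f"physical error rate: {p}")
    print(f"logical error rate:  {logical_error_rate(p):.5f}")
```

Adding more redundancy suppresses the logical error rate further, but only if decoding keeps pace as the system grows; that, scaled to millions and eventually trillions of quantum operations, is the problem Deltaflow and its competitors are trying to solve.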
Scott Data Center
Optimizes Data Center with Open Networking and EVPN-VXLAN
Optimizing Data Centers with Open Networking: Scott Data's Leap to Future-Proof, Cost-Efficient, and Scalable Solutions with UfiSpace and IP Infusion.
No data center can afford to lag behind in today's fast-paced digital landscape. Instead of settling for outdated network solutions, forward-thinking businesses like Scott Data, a nationally recognized Tier III certified multi-tenant data center, are adopting open networking to stay ahead of the game.
Download the eBook to learn:
- The tangible benefits of white-box networking solutions
- How open networking can reduce total cost of ownership while minimizing lead times and complexity
- The strategic advantages that position Scott Data at the forefront of digital transformation in the data center industry
Discover how Scott Data partnered with UfiSpace and IP Infusion to revolutionize their colocation, cloud computing, and disaster recovery services, achieving a scalable, future-proof network that streamlined management and slashed costs.
This is just a taste of what’s going on. If you want the whole scoop, then register for one of our email newsletters, but only if you’re going to read it. We want to improve the sustainability of editorial operations, so we don’t want to send you newsletters that are just going to sit there unopened. If you're a subscriber already, please make sure Mimecast and other inbox bouncers know that we’re cool and they should let us through.
Our bi-weekly LinkedIn newsletters arrive on Saturdays, so keep your eyes peeled for the top stories you may have missed between now and then.