Azure for Developers (AZ-204)
AZ-204 Preparation Guide


This article is for cloud developers who participate in all phases of development, from requirements definition and design to development, deployment, and maintenance, and who partner with cloud DBAs, cloud administrators, and clients to implement solutions. It will help you understand Azure SDKs, data storage options, data connections, APIs, app authentication and authorization, compute and container deployment, debugging, performance tuning, and monitoring.

Azure App Service Web Apps

Azure App Service is an HTTP-based service for hosting web apps, REST APIs, and mobile back ends. It has built-in autoscale support and integrates out of the box with CI/CD pipelines on Azure DevOps, GitHub, and Bitbucket, as well as deployment from FTP or a local Git repository. It supports deployment slots for Blue/Green deployments in the Standard and higher tiers. App Service can also host web apps natively on Linux for supported application stacks, and it can run custom Linux containers (also known as Web App for Containers).

An App Service plan defines a set of compute resources. One or more Web Apps, API Apps, Mobile Apps, or Function apps can run on the same App Service plan. There are several pricing tiers (Free, Shared, Basic, Standard, Premium, PremiumV2, PremiumV3, and Isolated), with the last one running on dedicated Azure VMs in dedicated Azure virtual networks. It provides network isolation on top of compute isolation for your apps, plus the maximum scale-out capabilities. Function apps also have a Consumption tier, which scales dynamically depending on workload.

An App Service plan can be scaled up and down by changing the pricing tier. You can potentially save money by putting multiple apps into one App Service plan if sharing the same compute resources is not an issue. Isolating an app in a new App Service plan is better when you need to scale its resources independently or when the app needs resources in a different region. In the Free and Shared tiers, the VMs are shared with other Azure customers.

The built-in authentication feature for App Service and Azure Functions can save you time and effort by providing out of the box authentication with federated identity providers.

There are two main deployment types for Azure App Service: the multitenant public service, which hosts App Service plans in all tiers except Isolated, and the single-tenant App Service Environment, which hosts Isolated-tier plans directly in your Azure virtual network.

Multitenant App Service networking features

Azure App Service is a distributed system. The roles that handle incoming HTTP or HTTPS requests are called front ends. The roles that host the customer workload are called workers. All the roles in an App Service deployment exist in a multitenant network. Because there are many different customers in the same App Service scale unit, you can't connect the App Service network directly to your network.

Autoscaling

Autoscaling is a cloud system or process that adjusts available resources based on the current demand. Autoscaling performs scaling in and out, as opposed to scaling up and down. It can be triggered according to a schedule, or by assessing whether the system is running short on resources. App Service monitors the resource metrics of a web app as it runs. Autoscaling doesn't have any effect on the performance of an individual web server powering the app; it only changes the number of web servers.

Autoscaling isn't the best approach for long-term growth. Autoscaling has an overhead associated with monitoring resources and determining whether to trigger a scaling event. If you can anticipate the rate of growth, manually scaling the system over time may be a more cost-effective approach. You can configure autoscaling to detect when to scale in and out according to a combination of factors or according to a schedule. The metrics you can monitor for a web app are CPU percentage, memory percentage, disk queue length, HTTP queue length, and data in and data out.

A few good practices to consider when creating your autoscale rules are:

  • Ensure the maximum and minimum values are different and have an adequate margin between them
  • Choose the appropriate statistic for your diagnostics metric
  • Choose the thresholds carefully for all metric types
  • Always select a safe default instance count
  • Configure autoscale notifications

Deployment Slots

When you deploy to Azure App Service, you can use a separate deployment slot instead of the default production slot when you're running in the Standard, Premium, or Isolated App Service plan tier. Deployment slots are live apps with their own host names. App content and configuration elements can be swapped between two deployment slots, including the production slot. The benefit of this feature is that you can validate app changes in a staging deployment slot before swapping it with the production slot (Blue/Green deployments). The swap also makes sure that all instances of the slot are warmed up before being swapped into production, which eliminates downtime when you deploy. Traffic redirection is seamless, and no requests are dropped because of swap operations. There's no additional charge for using deployment slots.

Azure Functions

Azure Functions is most commonly used for data, image, and order processing, integrating systems, IoT, simple APIs, microservices, file maintenance, or any task that you want to run on a schedule. Functions support triggers, which are ways to start the execution of your code, and bindings, which are ways to simplify coding for input and output data. Both Functions and Logic Apps enable serverless workloads.
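To make triggers and bindings concrete, here is a minimal sketch of an HTTP-triggered function with a queue output binding, written against the Python v2 programming model; the route, queue name, and connection setting are illustrative assumptions rather than anything prescribed by the exam.

```python
import json

import azure.functions as func

app = func.FunctionApp()

# Trigger: an HTTP request starts execution.
# Binding: whatever is passed to msg.set() is written to a storage queue,
# without any queue-specific code inside the function body.
@app.route(route="orders", auth_level=func.AuthLevel.FUNCTION)
@app.queue_output(arg_name="msg", queue_name="orders",
                  connection="AzureWebJobsStorage")
def submit_order(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
    order = req.get_json()            # parse the incoming request body
    msg.set(json.dumps(order))        # the output binding handles the queue write
    return func.HttpResponse(f"Order {order.get('id')} accepted", status_code=202)
```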

Azure Functions vs Azure Logic Apps

Functions and Logic Apps both enable serverless workloads. Azure Functions is a serverless compute service, and Azure Logic Apps provides serverless workflows. The key difference is how you build them: Functions are developed code-first in your editor or the portal, while Logic Apps workflows are designed declaratively in a visual designer.


Azure Functions vs WebJobs

Azure App Service WebJobs with the WebJobs SDK is a code-first integration service designed for developers. Azure Functions is itself built on the WebJobs SDK, so both are built on Azure App Service and support features such as source control integration, authentication, and monitoring with Application Insights integration.


Azure Functions Hosting Plans

There are three basic hosting plans available for Azure Functions: the Consumption plan, the Functions Premium plan, and the App Service (Dedicated) plan. There are two other hosting options that provide the highest amount of control and isolation in which to run your function apps: App Service Environment and Kubernetes.

If you run on a Dedicated plan, you should enable the Always On setting so that your function app runs correctly. On an App Service plan, the Functions runtime goes idle after a few minutes of inactivity, so only HTTP triggers will "wake up" your functions. Always On is available only on an App Service plan. On a Consumption plan, the platform activates function apps automatically.

In the Consumption and Premium plans, Azure Functions scales CPU and memory resources by adding additional instances of the Functions host. The number of instances is determined by the number of events that trigger a function.

Blob Storage

Azure Blob storage is Microsoft's object storage solution for storing massive amounts of unstructured data, such as text or binary data, in the cloud. Blob storage is designed for:

  • Serving images or documents directly to a browser.
  • Storing files for distributed access.
  • Streaming video and audio.
  • Writing to log files.
  • Storing data for backup and restore, disaster recovery, and archiving.
  • Storing data for analysis by an on-premises or Azure-hosted service.

Data can be accessed via HTTP/HTTPS, from anywhere in the world. Objects in Blob storage are accessible via the Azure Storage REST API, Azure PowerShell, Azure CLI, or an Azure Storage client library.

Azure Storage offers two performance tiers:

  • Standard, which supports Blob, Queue, and Table storage and Azure Files, and
  • Premium, which runs on SSDs and supports block blobs, page blobs, or file shares.

To keep the solution cost-effective, block blob data can be assigned to the Hot, Cool, or Archive access tier. Early in the lifecycle, people access some data often, but the need for access drops drastically as the data ages, and some data stays idle in the cloud and is rarely accessed once stored. The Cool tier is for data that is infrequently accessed and stored for at least 30 days, and the Archive tier is optimized for data that is rarely accessed and stored for at least 180 days, with flexible latency requirements on the order of hours.

Blob storage offers three types of resources: the storage account, a container in the storage account, and a blob in a container. A storage account provides a unique namespace; if your storage account name is "mystorage101", the default blob endpoint will be https://mystorage101.blob.core.windows.net. A container is like a directory in a file system. Azure Storage supports three different types of blobs (a short SDK sketch follows the list below):

  • Block blobs, which store text and binary data, up to about 4.7 TB
  • Append blobs, which are optimized for append operations such as logging data
  • Page blobs, which store VHD files and serve as disks for Azure VMs, up to 8 TB
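As a rough illustration of the account / container / blob hierarchy, here is a minimal sketch using the azure-storage-blob Python SDK; the account URL, container name, and blob name are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

credential = DefaultAzureCredential()
service = BlobServiceClient(
    account_url="https://mystorage101.blob.core.windows.net",  # the account's default blob endpoint
    credential=credential,
)

container = service.get_container_client("documents")          # container: like a directory
container.upload_blob("report.txt", b"hello blob", overwrite=True)  # uploads a block blob

blob = container.download_blob("report.txt")                   # read it back
print(blob.readall())
```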

Azure Storage has several security features, such as Azure AD authentication and RBAC. Data can be secured in transit with client-side encryption, HTTPS, or SMB 3.0, and data disks can also be encrypted. Azure Storage automatically encrypts your data when persisting it to the cloud, with encryption similar to BitLocker on Windows, and encryption cannot be disabled. You can rely on Microsoft-managed keys for the encryption of your storage account, or you can manage encryption with your own keys.

Azure Storage always stores multiple copies of your data so that it is protected from planned and unplanned events. The factors that help determine which redundancy option you should choose include how your data is replicated in the primary region, whether your data is replicated to a second region that is geographically distant from the primary region, and whether your application requires read access to the replicated data in the secondary region.

Data in an Azure Storage account is always replicated three times in the primary region, using either LRS (locally redundant storage) or ZRS (zone-redundant storage). The difference is that LRS keeps the three copies in the same physical location, while ZRS spreads them across three different availability zones.

If your application needs higher durability, you can add a secondary region by choosing between GRS (geo-redundant storage) and GZRS (geo-zone-redundant storage). Both first complete the LRS or ZRS copy synchronously in the primary region and then copy your data asynchronously to a single physical location in the secondary region, where it is replicated three times.

Azure Cosmos DB

Azure Cosmos DB is designed to provide low latency, elastic scalability of throughput, well-defined semantics for data consistency, and high availability. With its novel multi-master replication protocol, every region supports both writes and reads, unlimited elastic write and read scalability, 99.999% read and write availability all around the world, and guaranteed reads and writes served in less than 10 milliseconds at the 99th percentile.

You can create one or more Azure Cosmos databases under your account. A database is analogous to a namespace and is the unit of management for a set of Azure Cosmos containers. An Azure Cosmos container is the fundamental unit of scalability: you can have virtually unlimited provisioned throughput (RU/s) and storage on a container. When you create a container, you configure throughput either in dedicated provisioned throughput mode or in shared provisioned throughput mode; the latter shares the provisioned throughput with other containers in the same database.
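A minimal sketch of this hierarchy with the azure-cosmos Python SDK is shown below, including a point read; the endpoint, key, names, and partition key path are illustrative assumptions.

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(url="https://myaccount.documents.azure.com:443/",
                      credential="<account-key>")

database = client.create_database_if_not_exists("storefront")
container = database.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/customerId"),
    offer_throughput=400,            # dedicated provisioned throughput (RU/s)
)

container.upsert_item({"id": "1", "customerId": "c42", "total": 19.90})

# Point read: fetch a single item by id + partition key value
# (about 1 RU for a 1 KB item)
item = container.read_item(item="1", partition_key="c42")
print(item["total"])
```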

Azure Cosmos DB approaches data consistency as a spectrum of choices instead of two extremes, offering five well-defined levels, from strongest to most relaxed:

  • Strong
  • Bounded Staleness
  • Session
  • Consistent Prefix
  • Eventual

The consistency levels are region-agnostic and are guaranteed for all operations, regardless of the region from which the reads and writes are served, the number of regions associated with your Azure Cosmos account, or whether your account is configured with a single or multiple write regions. In practice, you may often get stronger consistency guarantees. The consistency level you choose determines the freshness and ordering guarantees of the database state for your read operations.

Azure Cosmos DB offers multiple database APIs, which include the Core (SQL) API, the API for MongoDB, the Cassandra API, the Gremlin API, and the Table API. These APIs allow your applications to treat Azure Cosmos DB as if it were various other database technologies, without the overhead of managing and scaling them yourself. By using these APIs, you can model real-world data using document, key-value, graph, and column-family data models.

The cost of all database operations is normalized by Azure Cosmos DB and expressed in Request Units (RUs). A request unit is a token that represents the system resources, such as CPU, IOPS, and memory, that are required to perform the database operations supported by Azure Cosmos DB. The cost of a point read, which fetches a single item by its ID and partition key value, is 1 RU for a 1 KB item. There are three ways an Azure Cosmos account can charge for RU consumption: provisioned throughput mode, where you provision a number of RUs for your application on a per-second basis; serverless mode, where you are billed for the RUs consumed by each request; and autoscale mode, where the throughput (RU/s) of your database or container is scaled automatically and instantly based on its usage.

Azure VMs, ARM Templates, and Containers

Azure Virtual Machines are the on-demand, scalable computing resource that Azure offers. You choose a VM when you need more control over the computing environment than the other choices offer. VMs can be used for development, testing, and production, to run your applications in the cloud or to extend your datacentre capabilities. A few important aspects to consider before using Azure VMs are availability, size, storage (HDD or SSD), and account limitations.

Regarding availability, Azure offers several options to ensure durability through the combination of fault domains and update domains. An availability set is a logical grouping of VMs across these two groupings, protecting against hardware failures and allowing updates to be applied safely. You can also combine an Azure Load Balancer with an availability zone or availability set to get the most application resiliency.

Regarding size, Azure offers six different families: General Purpose as a dev/test and balanced-workload solution; Compute, Memory, or Storage Optimized machines according to your needs; and, for high performance, two further families, GPU optimized for heavy graphics rendering and High Performance Compute with optional high-throughput network interfaces.

When an Azure user sends a request from any of the Azure tools, APIs, or SDKs, Azure Resource Manager receives the request, authenticates and authorizes it, and then sends it to the Azure service, which takes the requested action. Because all requests are handled through the same API, you see consistent results and capabilities across all the different tools.

There are many advantages to using ARM templates, among them declarative syntax, repeatable results, and orchestration. You can deploy an ARM template using the Azure portal, Azure CLI, PowerShell, the REST API, a button in a GitHub repository, or Azure Cloud Shell. You can also use a template for updates to the infrastructure; for example, you can add a resource to your solution and add configuration rules for the resources that are already deployed. If the template specifies creating a resource but that resource already exists, Azure Resource Manager performs an update instead of creating a new asset.

When deploying your resources, you specify whether the deployment is an incremental update or a complete update. The difference between these two modes is how Resource Manager handles existing resources in the resource group that aren't in the template; the default mode is incremental. In complete mode, Resource Manager deletes resources that exist in the resource group but aren't specified in the template. In incremental mode, Resource Manager leaves unchanged resources that exist in the resource group but aren't specified in the template.
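As a hedged sketch of how a template deployment might be started from code, the example below uses the azure-mgmt-resource Python SDK; the resource group, template file, parameter names, and subscription ID are assumptions, and the Azure CLI or PowerShell would work just as well.

```python
import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

with open("azuredeploy.json") as f:        # the ARM template to deploy (placeholder file name)
    template = json.load(f)

poller = client.deployments.begin_create_or_update(
    "rg-demo",                             # existing resource group (placeholder)
    "app-deployment",
    {
        "properties": {
            "template": template,
            "parameters": {"appName": {"value": "myapp"}},
            "mode": "Incremental",         # "Complete" would delete resources not in the template
        }
    },
)
print(poller.result().properties.provisioning_state)
```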

The ACR (Azure Container Registry) service allows you to build container images in Azure or to use a registry with your existing container development and deployment pipelines. You can build on demand, or fully automate builds with triggers such as source code commits and base image updates. Images can be pulled by scalable orchestration systems and other Azure services that support building and running applications at scale, and developers can push to a container registry as part of a container development workflow. Grouped in a repository, each image is a read-only snapshot of a Docker-compatible container. Azure container registries can include both Windows and Linux images. Every Basic, Standard, and Premium Azure container registry benefits from advanced Azure storage features such as encryption at rest for image data security and geo-redundancy for image data protection.

ACI (Azure Container Instances) is a solution for any scenario that can operate in isolated containers, including simple applications, task automation, and build jobs. Some of its benefits are fast startup, direct container access, hypervisor-level security, customer data protection, custom sizes, persistent storage for state, and support for both Linux and Windows containers.

User Authentication and Authorization

The Microsoft identity platform helps your users and customers sign in and get authorized access to your own APIs or to Microsoft APIs. The components that make up the Microsoft identity platform are an OAuth 2.0 and OpenID Connect standard-compliant authentication service, open-source libraries (MSAL), an application management portal, an application configuration API, and PowerShell support.

To be able to delegate identity and access management functions to Azure Active Directory, an application must be registered with an Azure AD tenant, either as a single-tenant or as a multi-tenant application. Once you register your application in the portal, an application object and a service principal are created automatically. An application object is used as a blueprint to create one or more service principal objects, and a service principal is created in every tenant where the application is used. The service principal defines the access policy and permissions for the application in the Azure AD tenant, enabling core features such as authentication of the application during sign-in and authorization during resource access. There are three types of service principal: Application, which is the local representation of a global application object in a single tenant or directory; Managed identity, which provides an identity for applications to use when connecting to resources that support Azure AD authentication; and Legacy, which is an app created before app registrations were introduced or an app created through legacy experiences.

The Microsoft identity platform implements the OAuth 2.0 authorization protocol. OAuth 2.0 is a method through which a third-party app can access web-hosted resources on behalf of a user. Any web-hosted resource that integrates with the Microsoft identity platform has a resource identifier, or application ID URI; for example, for the Microsoft 365 Mail API you use https://outlook.office.com, for Azure Key Vault you use https://vault.azure.net, and so on. The same is true for any third-party resources that have integrated with the Microsoft identity platform. Any of these resources can also define a set of permissions that divide the functionality of that resource into smaller chunks. For example, the permission string https://graph.microsoft.com/Calendars.Read is used to request permission to read users' calendars in Microsoft Graph.

The Microsoft identity platform supports two types of permissions. Delegated permissions are used by apps that have a signed-in user present; the app has been delegated permission to act as the signed-in user when it makes calls to the target resource. Application permissions, on the other hand, are used by apps that run without a signed-in user present, for example, background services or daemons.

Applications in the Microsoft identity platform rely on consent to gain access to necessary resources or APIs. There are three consent types: static user consent, incremental and dynamic user consent, and admin consent. In the static user consent scenario, you must specify all the permissions the app needs in its configuration in the Azure portal. With incremental and dynamic user consent, you can ask for a minimum set of permissions upfront and request more over time as the customer uses additional app features. Finally, admin consent is required when your app needs access to certain high-privilege permissions. Conditional Access is also a solution that enables developers and enterprise customers to protect services with MFA, by allowing only Intune-enrolled devices to access specific services, and by restricting user locations and IP ranges.

MSAL (Microsoft Authentication Library) can be used to provide secure access to Microsoft Graph, other Microsoft APIs, third-party web APIs, or your own web API. MSAL supports many different application architectures and platforms, including .NET, JavaScript, Java, Python, Android, and iOS, and gives you many ways to get tokens with a consistent API across platforms. The benefits of using MSAL are listed below, followed by a short token-acquisition sketch:

  • There is no need to directly use OAuth libraries or code against the protocol in your application.
  • It acquires tokens on behalf of a user or on behalf of an application.
  • You don't need to handle token expiration on your own.
  • It helps you specify which audience you want your application to sign in.
  • It helps you set up your application from configuration files.
  • It helps you troubleshoot your app by exposing actionable exceptions, logging, and telemetry.
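As a rough illustration, here is a minimal MSAL for Python sketch that acquires a token with the client credentials flow (application permissions); the client ID, tenant ID, and secret are placeholders.

```python
import msal

app = msal.ConfidentialClientApplication(
    client_id="<app-registration-client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
    client_credential="<client-secret>",
)

# ".default" requests the application permissions already consented on the app registration
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

if "access_token" in result:
    token = result["access_token"]       # send as "Authorization: Bearer <token>"
else:
    print(result.get("error"), result.get("error_description"))
```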

A SAS (shared access signature) is a signed URI that points to one or more storage resources and includes a token that indicates how the resources may be accessed by the client. Azure Storage supports three types of shared access signatures: a user delegation SAS, which is secured with Azure AD credentials; a service SAS, which is secured with a storage account key; and an account SAS, which is also secured with a storage account key and supports all of the operations available via a service or user delegation SAS, as well as account-level operations. Use a SAS when you want to provide secure access to resources in your storage account to a client that does not otherwise have permission to those resources.
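Here is a minimal sketch of creating a service SAS for a single blob with the azure-storage-blob Python SDK; the account name, key, and blob are placeholders, and a user delegation key could be used instead of the account key.

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

sas_token = generate_blob_sas(
    account_name="mystorage101",
    container_name="documents",
    blob_name="report.txt",
    account_key="<storage-account-key>",
    permission=BlobSasPermissions(read=True),                     # how the resource may be accessed
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),       # short-lived access
)

# Append the token to the blob URL and hand it to the client
url = f"https://mystorage101.blob.core.windows.net/documents/report.txt?{sas_token}"
print(url)
```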

Secure Cloud Solutions

Azure Key Vault is a cloud service for securely storing and accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, or cryptographic keys. The Azure Key Vault service supports two types of containers: vaults and managed hardware security module (HSM) pools. Vaults support storing software and HSM-backed keys, secrets, and certificates; managed HSM pools only support HSM-backed keys. You can use Azure Key Vault for secret, key, and certificate management. Its key benefits are security, monitored activity of access and use, and centralized storage with simplified administration.
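A minimal sketch of reading a secret with the azure-keyvault-secrets Python SDK is shown below; the vault URL and secret name are placeholders, and DefaultAzureCredential will pick up a managed identity when the code runs in Azure.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()    # managed identity, CLI login, etc., depending on environment
client = SecretClient(vault_url="https://myvault.vault.azure.net", credential=credential)

secret = client.get_secret("database-connection-string")
print(secret.name, secret.properties.version)   # secret.value holds the actual secret
```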

Managed identities provide an identity for applications to use when connecting to resources that support Azure AD authentication. There are two types of managed identities: system-assigned and user-assigned. Internally, managed identities are service principals of a special type, which are locked to be used only with Azure resources.

App Configuration is the appropriate service to store all the settings for your application and secure their access in one place. It comes with many benefits such as flexible key representations and mappings, easy and fast set-up, dedicated UI for feature flag management, enhanced security through Azure-managed identities, encryption at rest and in transit, and native integration with popular frameworks. Azure App Configuration stores configuration data as key-value pairs.
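As a rough sketch, the azure-appconfiguration Python SDK can read a key-value like this; the connection string, key, and label are placeholders.

```python
from azure.appconfiguration import AzureAppConfigurationClient

client = AzureAppConfigurationClient.from_connection_string("<app-config-connection-string>")

# Labels let you keep per-environment variants of the same key
setting = client.get_configuration_setting(key="Storefront:BannerText", label="production")
print(setting.key, setting.value)
```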

API Management

API Management is a hybrid, multi-cloud management platform for APIs across all environments. It provides the core functionality to ensure a successful API program through developer engagement, business insights, analytics, security, and protection. Each API consists of one or more operations, and each API can be added to one or more products. To use an API, developers subscribe to a product that contains that API, and then they can call the API's operation, subject to any usage policies that may be in effect.

The system consists of the API gateway, which is responsible for routing calls to your back ends, verifying API keys, JWT tokens, certificates, and other credentials, enforcing rate limits and usage quotas, caching backend responses, and logging call metadata for analytics purposes. Another component is the Azure portal, which you can use to define or import API schemas, package APIs into products, set up policies, get insights from analytics, and manage users. Finally, the developer portal serves as the main web presence for developers, where they can read the API documentation, try out APIs via the interactive console, and access analytics on their own usage.

Products in API Management have one or more APIs and are configured with a title, description, and terms of use. A product can be Open (usable without a subscription) or Protected (subscription-based). Subscription approval is configured at the product level and can either require administrator approval or be auto-approved.

Groups are used to manage the visibility of products to developers. The three immutable system groups are Administrators (who can also create custom groups), Developers (authenticated developer portal users, who are the customers), and Guests (unauthenticated developer portal users).

Policies are a powerful capability of API Management that allows the publisher to change the behaviour of the API through configuration. They are a collection of statements that run sequentially on the request or response of an API, for purposes such as format conversion, rate limiting, and caching.

Event-Based Solutions

Azure Event Grid enables event-driven, reactive programming using the publish-subscribe model. Publishers emit events but have no expectation about how the events are handled; subscribers decide which events they want to handle. Event Grid has built-in support for events coming from Azure services, such as storage blobs and resource groups, and also supports your own events, using custom topics.

Event Grid has five main components (a publishing sketch follows the list):

  • Event: the smallest amount of information that fully describes something that happened in the system
  • Event source: where the event happens
  • Topic: the endpoint where publishers send events
  • Event subscription: the endpoint or built-in mechanism that routes events, sometimes to more than one handler; subscriptions are also used by handlers to intelligently filter incoming events
  • Event handler: the app or service reacting to the event
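Here is a minimal sketch of publishing a custom event to a custom topic with the azure-eventgrid Python SDK; the topic endpoint, access key, and event type are placeholders.

```python
from azure.core.credentials import AzureKeyCredential
from azure.eventgrid import EventGridEvent, EventGridPublisherClient

client = EventGridPublisherClient(
    endpoint="https://mytopic.westeurope-1.eventgrid.azure.net/api/events",
    credential=AzureKeyCredential("<topic-access-key>"),
)

event = EventGridEvent(
    event_type="Storefront.OrderPlaced",      # subscriptions can filter on this
    subject="orders/1",
    data={"orderId": "1", "total": 19.90},
    data_version="1.0",
)

client.send(event)   # handlers subscribed to the topic receive the event
```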

Event Grid provides the roles of Event Grid Subscription Reader and Event Grid Subscription Contributor, which allow you to read or manage event subscription operations, as well as the roles of Event Grid Contributor and Event Grid Data Sender, with which you can manage Event Grid resources and send events, respectively.

Azure Event Hubs is a big data streaming platform and event ingestion service. It can receive and process millions of events per second, and data sent to an event hub can be transformed and stored by using any real-time analytics provider or batching/storage adapters. Key features of the Azure Event Hubs service are that it is a fully managed PaaS solution, it supports real-time and batch processing, and it scales. The main components of Event Hubs (a producer sketch follows the list) are:

  • the Event Hub client, which is the interface for developers
  • the Event Hub producer, which serves as a source of telemetry data
  • the Event Hub consumer, which reads information from the event hub and allows processing
  • the partition, an ordered sequence of events held in an event hub
  • the consumer group, a view of an entire event hub
  • event receivers, which read event data from an event hub, and
  • throughput or processing units, pre-purchased units of capacity that control the throughput capacity of Event Hubs
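A minimal producer sketch with the azure-eventhub Python SDK looks like this; the connection string, hub name, and payloads are placeholders.

```python
from azure.eventhub import EventData, EventHubProducerClient

producer = EventHubProducerClient.from_connection_string(
    conn_str="<event-hubs-namespace-connection-string>",
    eventhub_name="telemetry",
)

with producer:
    batch = producer.create_batch()        # events added to one batch are sent together
    batch.add(EventData('{"deviceId": "sensor-1", "temperature": 21.5}'))
    batch.add(EventData('{"deviceId": "sensor-2", "temperature": 19.8}'))
    producer.send_batch(batch)             # consumers read these per partition / consumer group
```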

Message-Based Solutions

Azure supports two types of message queue mechanisms. Service Bus queues are part of a broader Azure messaging infrastructure that supports queuing, publish/subscribe, and more advanced integration patterns; they're designed to integrate applications or application components that may span multiple communication protocols, data contracts, trust domains, or network environments. Storage queues are part of the Azure Storage infrastructure. They allow you to store large numbers of messages, which you can access from anywhere in the world via authenticated calls using HTTP or HTTPS. A queue message can be up to 64 KB in size, and a queue may contain millions of messages, up to the total capacity limit of a storage account. Queues are commonly used to create a backlog of work to process asynchronously.

If your application needs to store over 80 GB of messages in a queue, needs to track progress for processing a message in the queue, or needs server-side logs of all of the transactions executed against your queues, then consider using Storage queues. For any other case, I would suggest Service Bus queues, as Service Bus is a fully managed enterprise integration message broker.
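As a rough illustration of the Storage queue model, here is a minimal sketch with the azure-storage-queue Python SDK; the connection string, queue name, and message contents are placeholders.

```python
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string("<storage-connection-string>",
                                            queue_name="work-items")

queue.send_message("resize-image:photo-42.png")   # each message can be up to 64 KB

for message in queue.receive_messages():
    print("processing", message.content)
    queue.delete_message(message)                  # remove only after successful processing
```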

Monitoring and Logging

Azure Monitor is a solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments, giving you the ability to understand how your applications are performing and to proactively identify issues affecting them and the resources they depend on. It collects data from a variety of sources, from your application to the OS, Azure services, the Azure subscription, and the Azure tenant. The data collected are either metrics (numerical values that describe some aspect of a system at a particular point in time) or logs (different kinds of data organized into records with different sets of properties for each type).

Monitoring data increases your visibility into the operations of your computing environment, and this is where Insights come into play, in the form of Application, Container, and VM Insights. Application Insights monitors the availability, performance, and usage of your web applications, whether they're hosted in the cloud or on-premises. It monitors request rates, response times, failure rates, exceptions, page views and load performance, AJAX calls, user and session counts, performance counters, host diagnostics from Docker or Azure, diagnostic trace logs, and custom events and metrics. Container Insights monitors the performance of container workloads deployed to AKS and Azure Container Instances, and VM Insights monitors your Azure virtual machines at scale.
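One way to send telemetry from Python code to Application Insights is the azure-monitor-opentelemetry distro, sketched below under the assumption that your app uses OpenTelemetry; the connection string is a placeholder.

```python
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

# Wires OpenTelemetry traces, metrics, and logs to Application Insights
configure_azure_monitor(
    connection_string="InstrumentationKey=<key>;IngestionEndpoint=<endpoint>",
)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("process-order"):   # appears as an operation in Application Insights
    print("doing work")
```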

Integrate Caching and CDN to your Solutions

Caching is a common technique that aims to improve the performance and scalability of a system by temporarily copying frequently accessed data to fast storage that's located close to the application. If this fast data storage is located closer to the application than the source, then caching can significantly improve response times for client applications by serving data more quickly. Azure Cache for Redis improves application performance by supporting data caching, content caching, session stores, job and message queuing, and distributed transactions.
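Below is a minimal cache-aside sketch against Azure Cache for Redis, using the standard redis Python client over TLS; the host name, access key, and cached values are placeholders.

```python
import redis

cache = redis.StrictRedis(
    host="mycache.redis.cache.windows.net",
    port=6380,                       # Azure Cache for Redis TLS port
    password="<access-key>",
    ssl=True,
)

# Cache-aside: try the cache first, fall back to the data store and populate the cache
product = cache.get("product:42")
if product is None:
    product = b'{"id": 42, "name": "widget"}'   # pretend this came from the database
    cache.set("product:42", product, ex=300)    # expire after 5 minutes
print(product)
```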

Conclusion

The content of this article is also material that can help you prepare for the Azure Developer Certification AZ-204. If you’re new to development or just want to adopt automation strategies that make your life easier, then the AZ-204 Exam is entirely worth the investment.

If you have just started your journey towards the cloud, then you can begin with the Azure Fundamentals. Here is my previous article that covers it:

I hope you've enjoyed reading this article as much as I've enjoyed writing it. Feel free to share it.
