GCP Architect Challenge – a quiz

Ready to put your Google Cloud Platform (GCP) skills to the test? Take the "CloudLabs GCP Architect Challenge" – a 15-question quiz designed to challenge future and present GCP architects!

Do you have what it takes to optimize CloudLabs' cloud usage for cost efficiency, enhance security, and ensure seamless scalability, all while following GCP best practices?

Test your knowledge and prove your expertise in cloud architecture. Answer all 15 questions and discover if you're up to the challenge. Don't worry; I've provided detailed explanations for the correct answers at the end.

Get started and see if you can conquer the CloudLabs GCP Architect Challenge! Your answers are waiting at the end of the quiz. Good luck!

Please note that the questions in this quiz are based on official questions from Google's Professional Cloud Architect certification, which can be found here: https://cloud.google.com/learn/certification/cloud-architect. However, the rationale and explanations for the answers provided here are developed by me to ensure a comprehensive understanding of each question.

Company: CloudLabs

Description: CloudLabs is a startup specializing in developing and delivering cloud-based educational content and labs for IT professionals and students. Their business model focuses on providing hands-on training and certification preparation through their online platform.

Case Scenario: CloudLabs is experiencing rapid growth and needs to optimize their GCP usage to ensure cost efficiency while providing seamless learning experiences. They are looking to enhance security, scalability, and cost control in their GCP environment.

Let's help them by answering 15 questions related to CloudLabs and GCP:

Question 1:

- Scenario: CloudLabs, an online learning platform, is experiencing sluggish performance on their website hosted in their on-premises data center. Users worldwide are complaining about slow response times. Additionally, the website is frequently targeted by distributed denial-of-service (DDoS) attacks from specific IP address ranges. CloudLabs has identified the website as a suitable candidate for migration to Google Cloud. What steps should CloudLabs take to improve access latency and enhance security?

- Options:

- A. Deploy an external HTTP(S) load balancer, configure VPC firewall rules, and migrate the applications to Compute Engine virtual machines.

- B. Deploy an external HTTP(S) load balancer, configure Google Cloud Armor, and migrate the application to Compute Engine virtual machines.

- C. Containerize the application and move it into Google Kubernetes Engine (GKE). Create a GKE service to expose the pods within the cluster, and set up a GKE network policy.

- D. Containerize the application and move it into Google Kubernetes Engine (GKE). Create an internal load balancer to expose the pods outside the cluster, and configure Identity-Aware Proxy (IAP) for access.

Question 2:

- Scenario: CloudLabs plans to connect their on-premises data center, which is located in a remote area over 200 kilometers from a Google-owned point of presence, to Google Cloud. They aim to maintain their existing hardware, and both on-premises servers and cloud resources are configured with private RFC 1918 IP addresses. The service provider offers best-effort internet connectivity with no SLA. What connectivity option should CloudLabs choose?

- Options:

- A. Provision Carrier Peering.

- B. Provision a new Internet connection.

- C. Provision a Partner Interconnect connection.

- D. Provision a Dedicated Interconnect connection.

Question 3:

- Scenario: CloudLabs stores sensitive educational data in Google Cloud Storage buckets. They have a requirement that data stored in Cloud Storage buckets in the "asia-southeast1" region must not leave that geographic area. What should CloudLabs do to meet this requirement?

- Options:

- A. Encrypt the data before storing it in the "asia-southeast1" region.

- B. Enable Virtual Private Cloud Service Controls, and create a service perimeter around the Cloud Storage resources.

- C. Assign the Identity and Access Management (IAM) "storage.objectViewer" role only to users and service accounts that need to use the data.

- D. Create an access control list (ACL) that limits access to the bucket to authorized users only and apply it to the buckets in the "asia-southeast1" region.

Question 4:

- Scenario: CloudLabs is retiring their current Virtual Private Network (VPN) infrastructure. They need to move their web-based sales tools to a BeyondCorp access model. Each sales employee has a Google Workspace account and uses it for single sign-on (SSO). What should CloudLabs do to implement this?

- Options:

- A. Create an Identity-Aware Proxy (IAP) connector that points to the sales tool application.

- B. Create a Google group for the sales tool application and upgrade that group to a security group.

- C. Deploy an external HTTP(S) load balancer and create a custom Cloud Armor policy for the sales tool application.

- D. For every sales employee who needs access to the sales tool application, give their Google Workspace user account the predefined AppEngine Viewer role.

Question 5:

- Scenario: CloudLabs wants to ensure the data compliance of their customers' personally identifiable information (PII). They need to generate anonymized usage reports about their services and delete PII data after a specific period. What should CloudLabs do to achieve this while minimizing costs?

- Options:

- A. Archive audit logs in Cloud Storage and manually generate reports.

- B. Write a Cloud Logging filter to export specific date ranges to Pub/Sub.

- C. Archive audit logs in BigQuery and generate reports using Google Data Studio.

- D. Archive user logs on a locally attached persistent disk and extract them to a text file for auditing.

Question 6:

- Scenario: CloudLabs wants to track whether someone is present in a meeting room reserved for a scheduled meeting. They have motion sensors in each of the 1000 meeting rooms across 5 offices on 3 continents. The sensors report their status every second, but they may have inconsistent connectivity. How should CloudLabs design the data ingestion process for this sensor network?

- Options:

- A. Have each device create a persistent connection to a Compute Engine instance and write messages to a custom application.

- B. Have devices poll for connectivity to Cloud SQL and insert the latest messages on a regular interval to a device-specific table.

- C. Have devices poll for connectivity to Pub/Sub and publish the latest messages on a regular interval to a shared topic for all devices.

- D. Have devices create a persistent connection to an App Engine application fronted by Cloud Endpoints, which ingests messages and writes them to Datastore.

Question 7:

- Scenario: CloudLabs needs to ensure that their new gaming platform is operated according to Google best practices. They want to verify that Google-recommended security best practices are being met while also providing the operations teams with the metrics they need. What should CloudLabs do to achieve this? (Choose two correct options)

- Options:

- A. Ensure that you aren't running privileged containers.

- B. Ensure that you are using obfuscated Tags on workloads.

- C. Ensure that you are using the native logging mechanisms.

- D. Ensure that workloads are not using securityContext to run as a group.

- E. Ensure that each cluster is running GKE metering so each team can be charged for their usage.

Question 8:

- Scenario: CloudLabs wants to implement Virtual Private Cloud (VPC) Service Controls to allow Cloud Shell usage by its developers while ensuring that developers do not have full access to managed services. How should CloudLabs balance these goals with their business requirements?

- Options:

- A. Use VPC Service Controls for the entire platform.

- B. Prioritize VPC Service Controls implementation over Cloud Shell usage for the entire platform.

- C. Include all developers in an access level associated with the service perimeter, and allow them to use Cloud Shell.

- D. Create a service perimeter around only the projects that handle sensitive data, and do not grant developers access to it.

Question 9:

- Scenario: CloudLabs is designing service level objectives (SLOs) for their new game that is in public beta. What should CloudLabs consider while defining meaningful SLOs for their game? (Choose two correct options)

- Options:

- A. Define one SLO as 99.9% game server availability and the other SLO as less than 100-ms latency.

- B. Define one SLO as service availability that is the same as Google Cloud's availability and the other SLO as 100-ms latency.

- C. Define one SLO as 99% HTTP requests return the 2xx status code and the other SLO as 99% requests return within 100 ms.

- D. Define one SLO as total uptime of the game server within a week and the other SLO as the mean response time of all HTTP requests that are less than 100 ms.

Question 10:

- Scenario: CloudLabs wants to bring existing recorded video content to new fans in emerging regions. What should CloudLabs do to achieve this considering their business and technical requirements?

- Options:

- A. Serve the video content directly from a multi-region Cloud Storage bucket.

- B. Use Cloud CDN to cache the video content from their existing public cloud provider.

- C. Use Apigee Edge to cache the video content from their existing public cloud provider.

- D. Replicate the video content in Google Kubernetes Engine clusters in regions close to the fans.

Question 11:

- Scenario: CloudLabs needs to protect customers' personally identifiable information (PII) data while personalizing product recommendations for their large industrial customers. How should CloudLabs achieve data privacy and deliver product recommendations?

- Options:

- A. Use AutoML to provide data to the recommendation service.

- B. Process PII data on-premises to keep private information more secure.

- C. Use the Cloud Data Loss Prevention (DLP) API to provide data to the recommendation service.

- D. Manually build, train, and test machine learning models to provide product recommendations anonymously.

Question 12:

- Scenario: CloudLabs is designing a future-proof hybrid environment that requires network connectivity between Google Cloud and their on-premises environment. What should CloudLabs do to ensure compatibility between Google Cloud and their on-premises networking environment?

- Options:

- A. Use the default VPC in their Google Cloud project and use a Cloud VPN connection between their on-premises environment and Google Cloud.

- B. Create a custom VPC in Google Cloud in auto mode and use a Cloud VPN connection between their on-premises environment and Google Cloud.

- C. Create a network plan for their VPC in Google Cloud that uses CIDR ranges that overlap with their on-premises environment and use a Cloud Interconnect connection between their on-premises environment and Google Cloud.

- D. Create a network plan for their VPC in Google Cloud that uses non-overlapping CIDR ranges with their on-premises environment and use a Cloud Interconnect connection between their on-premises environment and Google Cloud.

Question 13:

- Scenario: CloudLabs set up an autoscaling managed instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, they notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address, and CloudLabs has verified that the appropriate web response is coming from each instance using the curl command. What should CloudLabs do to ensure the backend is configured correctly?

- Options:

- A. Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer.

- B. Assign a public IP to each instance and configure a firewall rule to allow the load balancer to reach the instance public IP.

- C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.

- D. Create a tag on each instance with the name of the load balancer and configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.

Question 14:

- Scenario: CloudLabs has a 3-tier web application deployed in the same Google Cloud Virtual Private Cloud (VPC). Each tier (web, API, and database) scales independently, and network traffic should flow through the web to the API tier and then to the database tier. Traffic should not flow between the web and the database tier. How should CloudLabs configure the network with minimal steps?

- Options:

- A. Add each tier to a different subnetwork.

- B. Set up software-based firewalls on individual VMs.

- C. Add tags to each tier and set up routes to allow the desired traffic flow.

- D. Add tags to each tier and set up firewall rules to allow the desired traffic flow.

Question 15:

- Scenario: CloudLabs is designing a large distributed application with 30 microservices, and each microservice needs to connect to a database backend. What should CloudLabs do to store credentials securely?

- Options:

- A. In the source code.

- B. In an environment variable.

- C. In a secret management system.

- D. In a config file that has restricted access through ACLs.


Answers:

Question 1:

- Scenario: CloudLabs needs to improve access latency and provide security against distributed denial-of-service (DDoS) attacks on their website hosted in an on-premises data center. They also plan to migrate this website to Google Cloud. What should CloudLabs do?

- Options:

- A. Deploy an external HTTP(S) load balancer, configure VPC firewall rules, and move the applications onto Compute Engine virtual machines.

- B. Deploy an external HTTP(S) load balancer, configure Google Cloud Armor, and move the application onto Compute Engine virtual machines.

- C. Containerize the application and move it into Google Kubernetes Engine (GKE). Create a GKE service to expose the pods within the cluster and set up a GKE network policy.

- D. Containerize the application and move it into Google Kubernetes Engine (GKE). Create an internal load balancer to expose the pods outside the cluster and configure Identity-Aware Proxy (IAP) for access.

Answer: B. Deploy an external HTTP(S) load balancer, configure Google Cloud Armor, and move the application onto Compute Engine virtual machines.

Explanation: To address the latency and DDoS attack issues, CloudLabs should use a Google Cloud Load Balancer with Google Cloud Armor. This setup enhances website performance and security. Moving the application to Compute Engine virtual machines on Google Cloud ensures better scalability and reliability. Containerization and Kubernetes might be beneficial in different scenarios, but they are not the primary solutions for these specific issues.
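For illustration, here is a minimal sketch (Python, using the google-cloud-compute client) of creating a Cloud Armor security policy that blocks a known attack range at the edge; the project name and IP range are hypothetical:

```python
# pip install google-cloud-compute
from google.cloud import compute_v1

PROJECT = "cloudlabs-prod"          # hypothetical project
BLOCKED_RANGE = "203.0.113.0/24"    # example attacker range (RFC 5737 doc range)

client = compute_v1.SecurityPoliciesClient()

policy = compute_v1.SecurityPolicy(
    name="cloudlabs-edge-policy",
    rules=[
        # Deny traffic from the known attack range before it reaches backends.
        compute_v1.SecurityPolicyRule(
            action="deny(403)",
            priority=1000,
            match=compute_v1.SecurityPolicyRuleMatcher(
                versioned_expr="SRC_IPS_V1",
                config=compute_v1.SecurityPolicyRuleMatcherConfig(
                    src_ip_ranges=[BLOCKED_RANGE]
                ),
            ),
        ),
        # Default rule (lowest priority): allow everything else.
        compute_v1.SecurityPolicyRule(
            action="allow",
            priority=2147483647,
            match=compute_v1.SecurityPolicyRuleMatcher(
                versioned_expr="SRC_IPS_V1",
                config=compute_v1.SecurityPolicyRuleMatcherConfig(
                    src_ip_ranges=["*"]
                ),
            ),
        ),
    ],
)

operation = client.insert(project=PROJECT, security_policy_resource=policy)
operation.result()  # wait for the create operation to finish
```

The policy is then attached to the load balancer's backend service, so malicious traffic is dropped at Google's edge before it ever reaches the Compute Engine instances.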


Question 2:

- Scenario: CloudLabs wants to connect one of their data centers to Google Cloud, and they are over 200 kilometers from a Google-owned point of presence. They want to maintain their existing hardware, and the connection must accommodate future throughput growth. What should CloudLabs recommend?

- Options:

- A. Provision Carrier Peering.

- B. Provision a new Internet connection.

- C. Provision a Partner Interconnect connection.

- D. Provision a Dedicated Interconnect connection.

Answer: C. Provision a Partner Interconnect connection.

Explanation: When CloudLabs is distant from a Google-owned point of presence, doesn't want to invest in new hardware, and needs to ensure future throughput growth, Partner Interconnect is the most suitable option. It allows them to connect to Google Cloud through a supported service provider without requiring dedicated hardware.


Question 3:

- Scenario: CloudLabs stores sensitive educational data for customers with stringent data security requirements. They need to ensure that data stored in Cloud Storage buckets does not leave the geographic region where the buckets are hosted. What should CloudLabs do?

- Options:

- A. Encrypt the data in the application on-premises before the data is stored in the "asia-southeast1" region.

- B. Enable Virtual Private Cloud Service Controls and create a service perimeter around the Cloud Storage resources.

- C. Assign the Identity and Access Management (IAM) "storage.objectViewer" role only to users and service accounts that need to use the data.

- D. Create an access control list (ACL) that limits access to the bucket to authorized users only and apply it to the buckets in the "asia-southeast1" region.

Answer: B. Enable Virtual Private Cloud Service Controls and create a service perimeter around the Cloud Storage resources.

Explanation: To ensure that sensitive data stored in Cloud Storage buckets within a specific geographic region doesn't leave that area, CloudLabs should use Virtual Private Cloud (VPC) Service Controls. By enabling VPC Service Controls and creating a service perimeter around the Cloud Storage resources, they can enforce constraints on data access and movement while maintaining access controls and security. The other options may provide some security but don't address the geographic data locality requirement.


Question 4:

- Scenario: CloudLabs' sales employees need to access web-based sales tools located in their data center from remote locations. They are retiring their current VPN infrastructure and want to move to a BeyondCorp access model. What should CloudLabs do?

- Options:

- A. Create an Identity-Aware Proxy (IAP) connector that points to the sales tool application.

- B. Create a Google group for the sales tool application and upgrade that group to a security group.

- C. Deploy an external HTTP(S) load balancer and create a custom Cloud Armor policy for the sales tool application.

- D. Give their Google Workspace user accounts the predefined AppEngine Viewer role for access to the sales tool application.

Answer: A. Create an Identity-Aware Proxy (IAP) connector that points to the sales tool application.

Explanation: To implement the BeyondCorp access model and provide secure remote access to web-based sales tools, CloudLabs should use Identity-Aware Proxy (IAP). Creating an IAP connector for the sales tool application allows for context-aware access controls and ensures security. The other options do not align with the BeyondCorp model or provide the same level of security.
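For service-to-service calls, a client authenticates to an IAP-protected app by presenting an OpenID Connect token minted for the app's OAuth client ID. A minimal sketch follows (the client ID and URL are hypothetical; interactive users simply sign in with their Google Workspace account in the browser):

```python
# pip install google-auth requests
import requests
import google.auth.transport.requests
from google.oauth2 import id_token

# Hypothetical values: the OAuth client ID of the IAP-protected app and its URL.
IAP_CLIENT_ID = "1234567890-abc.apps.googleusercontent.com"
APP_URL = "https://sales-tools.example.com"

# Obtain an OIDC token for the IAP audience using the ambient
# service-account credentials (Application Default Credentials).
auth_request = google.auth.transport.requests.Request()
token = id_token.fetch_id_token(auth_request, IAP_CLIENT_ID)

# IAP verifies the caller's identity and context before the request
# ever reaches the application behind it.
response = requests.get(APP_URL, headers={"Authorization": f"Bearer {token}"})
print(response.status_code)
```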

Question 5:

- Scenario: CloudLabs must protect customer personally identifiable information (PII) and wants to generate anonymized usage reports about their new game while deleting PII data after a specific period. What should CloudLabs do?

- Options:

- A. Archive audit logs in Cloud Storage and manually generate reports.

- B. Write a Cloud Logging filter to export specific date ranges to Pub/Sub.

- C. Archive audit logs in BigQuery and generate reports using Google Data Studio.

- D. Archive user logs on a locally attached persistent disk and cat them to a text file for auditing.

Answer: C. Archive audit logs in BigQuery and generate reports using Google Data Studio.

Explanation: To ensure data privacy, generate anonymized usage reports, and delete PII data after a specific period, CloudLabs should use BigQuery to archive the audit logs. BigQuery allows efficient data analysis and anonymization. Google Data Studio can then be used to generate reports based on the archived data. The other options may not provide the same level of data processing and reporting capabilities.
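As a sketch of both halves of the requirement (dataset, table, and retention period are hypothetical): an aggregate query that touches no PII columns, plus a table expiration so the raw logs are deleted automatically:

```python
# pip install google-cloud-bigquery
import datetime
from google.cloud import bigquery

client = bigquery.Client()  # uses Application Default Credentials

TABLE_ID = "cloudlabs-prod.analytics.usage_logs"  # hypothetical table

# Anonymized report: aggregate usage without selecting any PII columns.
query = """
    SELECT DATE(timestamp) AS day, service, COUNT(*) AS requests
    FROM `cloudlabs-prod.analytics.usage_logs`
    GROUP BY day, service
    ORDER BY day
"""
for row in client.query(query).result():
    print(row.day, row.service, row.requests)

# Retention requirement: expire the raw (PII-bearing) table after 90 days
# so BigQuery deletes it automatically.
table = client.get_table(TABLE_ID)
table.expires = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(days=90)
client.update_table(table, ["expires"])
```

Google Data Studio (now Looker Studio) can then be pointed at the aggregated query results to produce the reports.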

Question 6:

- Scenario: CloudLabs wants to track whether someone is present in a meeting room reserved for a scheduled meeting. They have motion sensors in each of the 1000 meeting rooms across 5 offices on 3 continents. The sensors report their status every second, but they may have inconsistent connectivity. How should CloudLabs design the data ingestion process for this sensor network?

- Options:

- A. Have each device create a persistent connection to a Compute Engine instance and write messages to a custom application.

- B. Have devices poll for connectivity to Cloud SQL and insert the latest messages on a regular interval to a device-specific table.

- C. Have devices poll for connectivity to Pub/Sub and publish the latest messages on a regular interval to a shared topic for all devices.

- D. Have devices create a persistent connection to an App Engine application fronted by Cloud Endpoints, which ingests messages and writes them to Datastore.

Answer: C. Have devices poll for connectivity to Pub/Sub and publish the latest messages on a regular interval to a shared topic for all devices.

Explanation: To support a sensor network with inconsistent connectivity, CloudLabs should use Pub/Sub. It allows devices to publish messages whenever they have connectivity, and subscribing applications can consume those messages when needed. Pub/Sub provides a reliable and asynchronous messaging mechanism. The other options are far less efficient at handling inconsistent connectivity.
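A minimal publisher sketch for such a device, assuming a shared Pub/Sub topic (project, topic, and room IDs are hypothetical):

```python
# pip install google-cloud-pubsub
import json
import time
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("cloudlabs-prod", "room-occupancy")  # hypothetical

def report_status(room_id: str, occupied: bool) -> None:
    payload = json.dumps({
        "room": room_id,
        "occupied": occupied,
        "ts": time.time(),
    }).encode("utf-8")
    # publish() batches and retries transient failures; the attribute lets
    # subscribers filter by room without decoding the payload.
    future = publisher.publish(topic_path, payload, room=room_id)
    future.result(timeout=30)  # block until acknowledged (or raise)

report_status("hq-3-meeting-12", True)
```

Because the client library batches and retries publishes, a device that regains connectivity can flush its latest readings without any custom server-side ingestion code.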

Question 7:

- Scenario: CloudLabs wants to verify that their new gaming platform follows Google best practices for security and provides the necessary metrics for the operations teams. What should CloudLabs do? (Choose two correct options)

- Options:

- A. Ensure that you aren't running privileged containers.

- B. Ensure that you are using obfuscated Tags on workloads.

- C. Ensure that you are using the native logging mechanisms.

- D. Ensure that workloads are not using securityContext to run as a group.

- E. Ensure that each cluster is running GKE metering so each team can be charged for their usage.

Answer: A and C. Ensure that you aren't running privileged containers, and ensure that you are using the native logging mechanisms.

Explanation: CloudLabs should ensure that their containers do not run with unnecessary privileges (option A) and that they are using the native logging mechanisms (option C) provided by Google Kubernetes Engine (GKE) to adhere to best practices. Privileged containers and custom logging mechanisms can introduce security and operational risks. The other options do not directly address these concerns or best practices.

Question 8:

- Scenario: CloudLabs needs to implement Virtual Private Cloud (VPC) Service Controls, allowing Cloud Shell usage by their developers while restricting their access to managed services. What should CloudLabs do?

- Options:

- A. Use VPC Service Controls for the entire platform.

- B. Prioritize VPC Service Controls implementation over Cloud Shell usage for the entire platform.

- C. Include all developers in an access level associated with the service perimeter and allow them to use Cloud Shell.

- D. Create a service perimeter around only the projects that handle sensitive data and do not grant developers access to it.

Answer: D. Create a service perimeter around only the projects that handle sensitive data and do not grant developers access to it.

Explanation: To balance security and business requirements, CloudLabs should create a service perimeter (option D) specifically around the projects that handle sensitive data. By not granting developers access to this service perimeter, Cloud Shell can be used by developers while still enforcing security controls. The other options do not provide the same level of granularity and control over Cloud Shell usage.

Question 9:

- Scenario: CloudLabs is preparing for a public beta launch of their new game and wants to establish meaningful Service Level Objectives (SLOs) before the launch. What should CloudLabs do? (Choose two correct options)

- Options:

- A. Define one SLO as 99.9% game server availability. Define the other SLO as less than 100-ms latency.

- B. Define one SLO as service availability that is the same as Google Cloud's availability. Define the other SLO as 100-ms latency.

- C. Define one SLO as 99% HTTP requests return the 2xx status code. Define the other SLO as 99% requests return within 100 ms.

- D. Define one SLO as total uptime of the game server within a week. Define the other SLO as the mean response time of all HTTP requests that are less than 100 ms.

Answer: A and C. Define one SLO as 99.9% game server availability, and define the other SLO as 99% of HTTP requests returning the 2xx status code.

Explanation: CloudLabs should set SLOs that are relevant to their game's performance and availability. This means specifying both server availability (option A) and the success rate of HTTP requests (option C). These metrics directly reflect the quality of the game and its user experience. Options B and D do not provide SLOs relevant to the game's performance and may not align with business goals.
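To make the arithmetic concrete, here is how the two SLOs from option C would be evaluated over a reporting window (the request counts are made up for illustration):

```python
# Worked example: evaluating the two SLOs from option C over a window.
total_requests = 1_200_000
ok_responses = 1_191_000       # HTTP 2xx responses
fast_responses = 1_194_000     # responses returned within 100 ms

availability_sli = ok_responses / total_requests   # 0.9925 -> meets 99% target
latency_sli = fast_responses / total_requests      # 0.9950 -> meets 99% target

for name, sli, target in [
    ("2xx rate", availability_sli, 0.99),
    ("latency < 100 ms", latency_sli, 0.99),
]:
    status = "OK" if sli >= target else "BURNING ERROR BUDGET"
    print(f"{name}: {sli:.4%} (target {target:.0%}) -> {status}")
```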

Question 10:

- Scenario: CloudLabs needs to bring existing recorded video content to new fans in emerging regions. What should CloudLabs do?

- Options:

- A. Serve the video content directly from a multi-region Cloud Storage bucket.

- B. Use Cloud CDN to cache the video content from CloudLabs' existing public cloud provider.

- C. Use Apigee Edge to cache the video content from CloudLabs' existing public cloud provider.

- D. Replicate the video content in Google Kubernetes Engine clusters in regions close to the fans.

Answer: B. Use Cloud CDN to cache the video content from CloudLabs' existing public cloud provider.

Explanation: CloudLabs should use Google Cloud CDN to cache the video content from their existing public cloud provider. This approach provides low-latency access to the content and reduces the load on the origin server, which is beneficial when serving content to users in emerging regions. Replicating content in GKE clusters (option D) introduces unnecessary complexity and may not be cost-effective.

Question 11:

- Scenario: CloudLabs' data compliance officer needs to protect customer personally identifiable information (PII). They want to personalize product recommendations for their large industrial customers. What should CloudLabs do?

- Options:

- A. Use AutoML to provide data to the recommendation service.

- B. Process PII data on-premises to keep the private information more secure.

- C. Use the Cloud Data Loss Prevention (DLP) API to provide data to the recommendation service.

- D. Manually build, train, and test machine learning models to provide product recommendations anonymously.

Answer: C. Use the Cloud Data Loss Prevention (DLP) API to provide data to the recommendation service.

Explanation: CloudLabs should use the Cloud DLP API to protect PII data while providing it to the recommendation service. This ensures data privacy and security. The other options do not provide the same level of protection and compliance.
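A minimal de-identification sketch with the Cloud DLP Python client (the project ID and sample text are hypothetical); PII findings are replaced with their infoType names before the data reaches the recommendation service:

```python
# pip install google-cloud-dlp
from google.cloud import dlp_v2

client = dlp_v2.DlpServiceClient()
PARENT = "projects/cloudlabs-prod"  # hypothetical project

text = "Contact Anna Kowalska at anna.kowalska@example.com"

response = client.deidentify_content(
    request={
        "parent": PARENT,
        "inspect_config": {
            "info_types": [{"name": "PERSON_NAME"}, {"name": "EMAIL_ADDRESS"}]
        },
        "deidentify_config": {
            "info_type_transformations": {
                "transformations": [{
                    # Replace each finding with its infoType name,
                    # e.g. "[PERSON_NAME]".
                    "primitive_transformation": {
                        "replace_with_info_type_config": {}
                    }
                }]
            }
        },
        "item": {"value": text},
    }
)
print(response.item.value)
# -> "Contact [PERSON_NAME] at [EMAIL_ADDRESS]"
```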

Question 12:

- Scenario: CloudLabs is designing a hybrid environment with network connectivity between Google Cloud and their on-premises environment. They want to ensure compatibility with their on-premises networking. What should CloudLabs do?

- Options:

- A. Use the default VPC in your Google Cloud project. Use a Cloud VPN connection between your on-premises environment and Google Cloud.

- B. Create a custom VPC in Google Cloud in auto mode. Use a Cloud VPN connection between your on-premises environment and Google Cloud.

- C. Create a network plan for your VPC in Google Cloud that uses CIDR ranges that overlap with your on-premises environment. Use a Cloud Interconnect connection between your on-premises environment and Google Cloud.

- D. Create a network plan for your VPC in Google Cloud that uses non-overlapping CIDR ranges with your on-premises environment. Use a Cloud Interconnect connection between your on-premises environment and Google Cloud.

Answer: D. Create a network plan for your VPC in Google Cloud that uses non-overlapping CIDR ranges with your on-premises environment. Use a Cloud Interconnect connection between your on-premises environment and Google Cloud.

Explanation:

To ensure compatibility between Google Cloud and the on-premises environment, CloudLabs should create a VPC in Google Cloud with non-overlapping CIDR ranges (option D). Using Cloud Interconnect is the right choice for establishing network connectivity with the on-premises environment. Options A and B involve VPN connections, which may not meet all the requirements.
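Non-overlapping ranges can be verified up front with nothing more than Python's standard library; the ranges below are illustrative:

```python
# Sanity-check the VPC plan for overlaps before provisioning
# Cloud Interconnect. All CIDR ranges here are illustrative.
import ipaddress

on_prem = ipaddress.ip_network("10.0.0.0/16")

candidate_subnets = {
    "gcp-web": ipaddress.ip_network("10.0.1.0/24"),   # overlaps on-prem!
    "gcp-api": ipaddress.ip_network("10.10.0.0/20"),  # safe
    "gcp-db": ipaddress.ip_network("172.16.0.0/20"),  # safe
}

for name, net in candidate_subnets.items():
    verdict = "OVERLAPS on-prem" if net.overlaps(on_prem) else "ok"
    print(f"{name} ({net}): {verdict}")
```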

Question 13:

- Scenario: CloudLabs set up an autoscaling managed instance group to serve web traffic for an upcoming launch. They notice that virtual machine instances are being terminated and relaunched every minute without public IP addresses. They verified that the appropriate web response is coming from each instance. What should CloudLabs do to ensure the backend is configured correctly?

- Options:

- A. Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer.

- B. Assign a public IP to each instance and configure a firewall rule to allow the load balancer to reach the instance public IP.

- C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.

- D. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.

Answer: C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.

Explanation: To ensure that instances are not terminated and relaunched unnecessarily, CloudLabs should ensure that a firewall rule exists to allow load balancer health checks (option C) to reach the instances in the instance group. When health checks are blocked, the autohealer marks every instance unhealthy and recreates it, which is exactly the one-minute churn observed here. The other options do not address this specific issue.
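A sketch of the missing firewall rule with the google-cloud-compute client (project, network, tag, and port are hypothetical; the two source ranges are Google's documented health-check ranges):

```python
# pip install google-cloud-compute
from google.cloud import compute_v1

PROJECT = "cloudlabs-prod"  # hypothetical

# Google's documented health-check source ranges.
HEALTH_CHECK_RANGES = ["130.211.0.0/22", "35.191.0.0/16"]

firewall = compute_v1.Firewall(
    name="allow-lb-health-checks",
    network="global/networks/default",
    direction="INGRESS",
    source_ranges=HEALTH_CHECK_RANGES,
    target_tags=["web-backend"],  # tag carried by the MIG's instance template
    allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["80"])],
)

operation = compute_v1.FirewallsClient().insert(
    project=PROJECT, firewall_resource=firewall
)
operation.result()  # wait for the rule to be created
```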

Question 14:

- Scenario: CloudLabs has a 3-tier web application deployed in the same Google Cloud Virtual Private Cloud (VPC). Each tier (web, API, and database) scales independently, and network traffic should flow through the web to the API tier and then to the database tier. Traffic should not flow between the web and the database tier. How should CloudLabs configure the network with minimal steps?

- Options:

- A. Add each tier to a different subnetwork.

- B. Set up software-based firewalls on individual VMs.

- C. Add tags to each tier and set up routes to allow the desired traffic flow.

- D. Add tags to each tier and set up firewall rules to allow the desired traffic flow.

Answer: D. Add tags to each tier and set up firewall rules to allow the desired traffic flow.

Explanation: Network tags combined with firewall rules are the minimal-step way to control traffic between tiers inside a single VPC. One rule allowing traffic from the "web" tag to the "api" tag and another from "api" to "db" permits exactly the desired paths, while the implied deny-ingress rule blocks direct web-to-database traffic. Subnetworks alone (option A) do not restrict traffic within a VPC, per-VM software firewalls (option B) are operationally heavy, and routes (option C) direct traffic but do not filter it.
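A sketch of the two tag-based rules with the google-cloud-compute client (project, tags, and ports are hypothetical):

```python
# pip install google-cloud-compute
from google.cloud import compute_v1

PROJECT = "cloudlabs-prod"  # hypothetical

def tag_rule(name: str, src_tag: str, dst_tag: str, port: str) -> compute_v1.Firewall:
    # Allow only src_tag -> dst_tag on the given port; any path not
    # explicitly allowed (e.g. web -> db) stays blocked by the VPC's
    # implied deny-ingress rule.
    return compute_v1.Firewall(
        name=name,
        network="global/networks/default",
        direction="INGRESS",
        source_tags=[src_tag],
        target_tags=[dst_tag],
        allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=[port])],
    )

client = compute_v1.FirewallsClient()
for rule in (
    tag_rule("web-to-api", "web", "api", "8080"),
    tag_rule("api-to-db", "api", "db", "5432"),
):
    client.insert(project=PROJECT, firewall_resource=rule).result()
```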

Question 15:

- Scenario: CloudLabs is designing a large distributed application with 30 microservices, and each microservice needs to connect to a database backend. What should CloudLabs do to store credentials securely?

- Options:

- A. In the source code.

- B. In an environment variable.

- C. In a secret management system.

- D. In a config file that has restricted access through ACLs.

Answer: C. In a secret management system.

Explanation: A secret management system such as Google Cloud Secret Manager stores credentials encrypted at rest, grants access per service identity through IAM, records every access in audit logs, and supports versioning and rotation. Credentials kept in source code (option A), environment variables (option B), or ACL-protected config files (option D) are easily leaked through repositories, logs, or snapshots and cannot be centrally rotated or audited.
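A minimal sketch of a microservice reading its database credentials from Secret Manager at startup (project and secret names are hypothetical):

```python
# pip install google-cloud-secret-manager
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()

# Hypothetical secret holding one microservice's database password.
name = "projects/cloudlabs-prod/secrets/orders-db-password/versions/latest"

response = client.access_secret_version(request={"name": name})
db_password = response.payload.data.decode("utf-8")
# Use db_password to open the database connection; never log it or
# bake it into the container image.
```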

I hope you answered them all correctly.

Mariusz (Mario) Dworniczak, PMP

Senior Technical Program Manager, IT Infrastructure and Cloud | Project Management, Cloud, AI, Cybersecurity, Leadership | Multi-Cloud (AWS | GCP | Azure) Architect
