• support@dumpspool.com

SPECIAL LIMITED-TIME DISCOUNT OFFER. USE DISCOUNT CODE DP2021 TO GET 20% OFF

PDF Only

Dumpspool PDF book

$35.00 Free Updates for Up to 90 Days

  • Professional-Cloud-Architect Dumps PDF
  • 275 Questions
  • Updated On October 04, 2024

PDF + Test Engine

Dumpspool PDF and Test Engine book

$60.00 Free Updates for Up to 90 Days

  • Professional-Cloud-Architect Question Answers
  • 275 Questions
  • Updated On October 04, 2024

Test Engine

Dumpspool Test Engine book

$50.00 Free Updates for Up to 90 Days

  • Professional-Cloud-Architect Practice Questions
  • 275 Questions
  • Updated On October 04, 2024
Check Our Free Google Professional-Cloud-Architect Online Test Engine Demo.

How to pass Google Professional-Cloud-Architect exam with the help of dumps?

DumpsPool provides the high-quality resources you have been searching for, so it's time to stop stressing and get ready for the exam. Our Online Test Engine provides you with the guidance you need to pass the certification exam. We guarantee top-grade results because we have covered each topic in a precise and understandable manner. Our expert team prepared the latest Google Professional-Cloud-Architect Dumps to satisfy your training needs, and they come in two formats: Dumps PDF and Online Test Engine.

How Do I Know Google Professional-Cloud-Architect Dumps are Worth it?

Did we mention our latest Professional-Cloud-Architect Dumps PDF is also available as an Online Test Engine? And that's just where things start to get interesting. Of all the features you are offered here at DumpsPool, the money-back guarantee has to be the best one, so you don't have to worry about your payment. Let us explore the other reasons you would want to buy from us. Besides affordable Real Exam Dumps, you are offered three months of free updates.

You can easily scroll through our large catalog of certification exams and pick any exam to start your training. That's right: DumpsPool isn't limited to just Google exams. We know our customers need the support of an authentic and reliable resource, so we make sure there is never any outdated content in our study materials. Our expert team keeps everything up to the mark by watching for every single update. Our main focus is making sure you understand the real exam format, so you can pass the exam the easier way!

IT Students Are Using our Google Certified Professional - Cloud Architect (GCP) Dumps Worldwide!

It is a well-established fact that certification exams can't be conquered without some help from experts, and that is exactly the point of using Google Certified Professional - Cloud Architect (GCP) Practice Question Answers. You are constantly surrounded by IT experts who have been through what you are about to face and know it better. DumpsPool's 24/7 customer service ensures you are in touch with these experts whenever needed. Our 100% success rate and worldwide validity make us the most trusted resource among candidates. The updated Dumps PDF helps you pass the exam on the first attempt, and with the money-back guarantee you can feel safe buying from us: you can claim a refund if you do not pass the exam.

How to Get Professional-Cloud-Architect Real Exam Dumps?

Getting access to the real exam dumps is as easy as pressing a button, literally! There are various resources available online, but most of them sell scams or copied content. So, if you are going to attempt the Professional-Cloud-Architect exam, you need to be sure you are buying the right kind of Dumps. All the Dumps PDF available on DumpsPool are as unique and up to date as they can be, and our Practice Question Answers are tested and approved by professionals, making this the most authentic resource available on the internet. Our experts have made sure the Online Test Engine is free from outdated or fake content, repeated questions, and false or vague information. We make every penny count, and you leave our platform fully satisfied!

Google Professional-Cloud-Architect Exam Overview:

Exam Name: Google Professional Cloud Architect
Exam Cost: $200 USD
Total Time: 2 hours
Available Languages: English, Japanese, Spanish, Portuguese
Passing Marks: 80%
Exam Format: Multiple choice and multiple select
Exam Provider: Google Cloud
Prerequisites: None
Certification Validity: 2 years

Google Certified Professional - Cloud Architect Exam Topics Breakdown

Designing and Planning a Cloud Solution Architecture: 35%
Managing and Provisioning a Solution Infrastructure: 20%
Designing for Security and Compliance: 20%
Analyzing and Optimizing Technical and Business Processes: 15%
Managing Implementation: 10%

Frequently Asked Questions

Google Professional-Cloud-Architect Sample Question Answers

Question # 1

You are monitoring Google Kubernetes Engine (GKE) clusters in a Cloud Monitoring workspace. As a Site Reliability Engineer (SRE), you need to triage incidents quickly. What should you do?

A. Navigate the predefined dashboards in the Cloud Monitoring workspace, and then add metrics and create alert policies.
B. Navigate the predefined dashboards in the Cloud Monitoring workspace, create custom metrics, and install alerting software on a Compute Engine instance.
C. Write a shell script that gathers metrics from GKE nodes, publish these metrics to a Pub/Sub topic, export the data to BigQuery, and make a Data Studio dashboard.
D. Create a custom dashboard in the Cloud Monitoring workspace for each incident, and then add metrics and create alert policies.

Question # 2

You are designing a data warehouse on Google Cloud and want to store sensitive data in BigQuery. Your company requires you to generate encryption keys outside of Google Cloud. You need to implement a solution. What should you do?

A. Generate a new key in Cloud Key Management Service (Cloud KMS). Store all data in Cloud Storage using the customer-managed key option and select the created key. Set up a Dataflow pipeline to decrypt the data and to store it in a BigQuery dataset.
B. Generate a new key in Cloud Key Management Service (Cloud KMS). Create a dataset in BigQuery using the customer-managed key option and select the created key.
C. Import a key in Cloud KMS. Store all data in Cloud Storage using the customer-managed key option and select the created key. Set up a Dataflow pipeline to decrypt the data and to store it in a new BigQuery dataset.
D. Import a key in Cloud KMS. Create a dataset in BigQuery using the customer-supplied key option and select the created key.
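For context, the "import a key, use it as a customer-managed key on a BigQuery dataset" approach can be sketched with the CLI. All project, keyring, key, and dataset names below are hypothetical, and the import-job step that actually wraps and uploads externally generated key material is omitted for brevity:

```shell
# Create a keyring and an import-only key to receive externally generated material
gcloud kms keyrings create dw-keyring --location=us
gcloud kms keys create dw-key --keyring=dw-keyring --location=us \
    --purpose=encryption --import-only --skip-initial-version-creation

# (A Cloud KMS import job would then wrap and import the external key material.)

# Create a BigQuery dataset whose tables default to the imported CMEK
bq mk --dataset \
    --default_kms_key=projects/my-project/locations/us/keyRings/dw-keyring/cryptoKeys/dw-key \
    my-project:sensitive_dw
```

Note that BigQuery's per-dataset option is a customer-managed key (CMEK) from Cloud KMS, not a customer-supplied key.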

Question # 3

Your team is developing a web application that will be deployed on Google Kubernetes Engine (GKE). Your CTO expects a successful launch, and you need to ensure your application can handle the expected load of tens of thousands of users. You want to test the current deployment to ensure the latency of your application stays below a certain threshold. What should you do?

A. Use a load testing tool to simulate the expected number of concurrent users and total requests to your application, and inspect the results.
B. Enable autoscaling on the GKE cluster and enable horizontal pod autoscaling on your application deployments. Send curl requests to your application, and validate if the autoscaling works.
C. Replicate the application over multiple GKE clusters in every Google Cloud region. Configure a global HTTP(S) load balancer to expose the different clusters over a single global IP address.
D. Use Cloud Debugger in the development environment to understand the latency between the different microservices.

Question # 4

An application development team has come to you for advice. They are planning to write and deploy an HTTP(S) API using Go 1.12. The API will have a very unpredictable workload and must remain reliable during peaks in traffic. They want to minimize operational overhead for this application. What approach should you recommend?

A. Use a Managed Instance Group when deploying to Compute Engine
B. Develop an application with containers, and deploy to Google Kubernetes Engine (GKE)
C. Develop the application for App Engine standard environment
D. Develop the application for App Engine Flexible environment using a custom runtime

Question # 5

Your company has a Google Cloud project that uses BigQuery for data warehousing. There are some tables that contain personally identifiable information (PII). Only the compliance team may access the PII. The other information in the tables must be available to the data science team. You want to minimize cost and the time it takes to assign appropriate access to the tables. What should you do?

A. 1. From the dataset where you have the source data, create views of the tables that you want to share, excluding PII. 2. Assign an appropriate project-level IAM role to the members of the data science team. 3. Assign access controls to the dataset that contains the views.
B. 1. From the dataset where you have the source data, create materialized views of the tables that you want to share, excluding PII. 2. Assign an appropriate project-level IAM role to the members of the data science team. 3. Assign access controls to the dataset that contains the views.
C. 1. Create a dataset for the data science team. 2. Create views of the tables that you want to share, excluding PII. 3. Assign an appropriate project-level IAM role to the members of the data science team. 4. Assign access controls to the dataset that contains the views. 5. Authorize the views to access the source dataset.
D. 1. Create a dataset for the data science team. 2. Create materialized views of the tables that you want to share, excluding PII. 3. Assign an appropriate project-level IAM role to the members of the data science team. 4. Assign access controls to the dataset that contains the views. 5. Authorize the views to access the source dataset.
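The "separate dataset with PII-free views" pattern in the last two options can be sketched with the bq CLI. Dataset, table, column, and group names here are hypothetical, and the final step of authorizing the view on the source dataset (editing the source dataset's access entries) is omitted:

```shell
# Create a separate dataset to hold the views for the data science team
bq mk --dataset my-project:science_views

# Create a view over the source table that excludes the PII columns
bq mk --use_legacy_sql=false \
    --view='SELECT order_id, amount, region FROM `my-project.warehouse.orders`' \
    my-project:science_views.orders_no_pii

# Grant the data science group read access to the view
bq add-iam-policy-binding \
    --member=group:data-science@example.com \
    --role=roles/bigquery.dataViewer \
    my-project:science_views.orders_no_pii
```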

Question # 6

You want to allow your operations team to store logs from all the production projects in your Organization, without including logs from other projects. All of the production projects are contained in a folder. You want to ensure that all logs for existing and new production projects are captured automatically. What should you do?

A. Create an aggregated export on the Production folder. Set the log sink to be a Cloud Storage bucket in an operations project.
B. Create an aggregated export on the Organization resource. Set the log sink to be a Cloud Storage bucket in an operations project.
C. Create log exports in the production projects. Set the log sinks to be a Cloud Storage bucket in an operations project.
D. Create log exports in the production projects. Set the log sinks to be BigQuery datasets in the production projects, and grant IAM access to the operations team to run queries on the datasets.
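A folder-level aggregated sink, as in the first option, can be sketched as follows. The folder ID, bucket name, and the sink's writer service account shown are placeholders; the real writer identity is printed when the sink is created:

```shell
# Aggregated sink on the Production folder; --include-children also captures
# logs from existing and newly created projects under the folder
gcloud logging sinks create prod-log-sink \
    storage.googleapis.com/prod-logs-bucket \
    --folder=123456789012 --include-children

# Grant the sink's writer identity (printed by the command above)
# permission to write objects into the destination bucket
gsutil iam ch \
    serviceAccount:SINK_WRITER_SA@example.iam.gserviceaccount.com:roles/storage.objectCreator \
    gs://prod-logs-bucket
```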

Question # 7

Your company has a support ticketing solution that uses App Engine Standard. The project that contains the App Engine application already has a Virtual Private Cloud (VPC) network fully connected to the company's on-premises environment through a Cloud VPN tunnel. You want to enable the App Engine application to communicate with a database that is running in the company's on-premises environment. What should you do?

A. Configure private services access
B. Configure private Google access for on-premises hosts only
C. Configure serverless VPC access
D. Configure private Google access

Question # 8

Your company is using Google Cloud. You have two folders under the Organization: Finance and Shopping. The members of the development team are in a Google Group. The development team group has been assigned the Project Owner role on the Organization. You want to prevent the development team from creating resources in projects in the Finance folder. What should you do?

A. Assign the development team group the Project Viewer role on the Finance folder, and assign the development team group the Project Owner role on the Shopping folder.
B. Assign the development team group only the Project Viewer role on the Finance folder.
C. Assign the development team group the Project Owner role on the Shopping folder, and remove the development team group's Project Owner role from the Organization.
D. Assign the development team group only the Project Owner role on the Shopping folder.
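The "grant at the folder, remove at the Organization" approach from option C can be sketched with the Resource Manager CLI. The folder ID, organization ID, and group address are hypothetical:

```shell
# Grant Project Owner on the Shopping folder only
gcloud resource-manager folders add-iam-policy-binding 987654321098 \
    --member=group:dev-team@example.com --role=roles/owner

# Remove the broad grant at the Organization level so it is no longer
# inherited by the Finance folder
gcloud organizations remove-iam-policy-binding 123456789012 \
    --member=group:dev-team@example.com --role=roles/owner
```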

Question # 9

Your company uses the Firewall Insights feature in the Google Network Intelligence Center. You have several firewall rules applied to Compute Engine instances. You need to evaluate the efficiency of the applied firewall ruleset. When you bring up the Firewall Insights page in the Google Cloud Console, you notice that there are no log rows to display. What should you do to troubleshoot the issue?

A. Enable Virtual Private Cloud (VPC) flow logging.
B. Enable Firewall Rules Logging for the firewall rules you want to monitor.
C. Verify that your user account is assigned the compute.networkAdmin Identity and Access Management (IAM) role.
D. Install the Google Cloud SDK, and verify that there are no firewall logs in the command-line output.

Question # 10

Your company is running its application workloads on Compute Engine. The applications have been deployed in production, acceptance, and development environments. The production environment is business-critical and is used 24/7, while the acceptance and development environments are only critical during office hours. Your CFO has asked you to optimize these environments to achieve cost savings during idle times. What should you do?

A. Create a shell script that uses the gcloud command to change the machine type of the development and acceptance instances to a smaller machine type outside of office hours. Schedule the shell script on one of the production instances to automate the task.
B. Use Cloud Scheduler to trigger a Cloud Function that will stop the development and acceptance environments after office hours and start them just before office hours.
C. Deploy the development and acceptance applications on a managed instance group and enable autoscaling.
D. Use regular Compute Engine instances for the production environment, and use preemptible VMs for the acceptance and development environments.

Question # 11

You are implementing the infrastructure for a web service on Google Cloud. The web service needs to receive and store the data from 500,000 requests per second. The data will be queried later in real time, based on exact matches of a known set of attributes. There will be periods when the web service will not receive any requests. The business wants to keep costs low. Which web service platform and database should you use for the application?

A. Cloud Run and BigQuery
B. Cloud Run and Cloud Bigtable
C. A Compute Engine autoscaling managed instance group and BigQuery
D. A Compute Engine autoscaling managed instance group and Cloud Bigtable

Question # 12

Your company has a Google Workspace account and a Google Cloud Organization. Some developers in the company have created Google Cloud projects outside of the Google Cloud Organization. You want to create an Organization structure that allows developers to create projects, but prevents them from modifying production projects. You want to manage policies for all projects centrally and be able to set more restrictive policies for production projects. You want to minimize disruption to users and developers when business needs change in the future, and you want to follow Google-recommended practices. How should you design the Organization structure?

A. 1. Create a second Google Workspace account and Organization. 2. Grant all developers the Project Creator IAM role on the new Organization. 3. Move the developer projects into the new Organization. 4. Set the policies for all projects on both Organizations. 5. Additionally, set the production policies on the original Organization.
B. 1. Create a folder under the Organization resource named "Production". 2. Grant all developers the Project Creator IAM role on the Organization. 3. Move the developer projects into the Organization. 4. Set the policies for all projects on the Organization. 5. Additionally, set the production policies on the "Production" folder.
C. 1. Create folders under the Organization resource named "Development" and "Production". 2. Grant all developers the Project Creator IAM role on the "Development" folder. 3. Move the developer projects into the "Development" folder. 4. Set the policies for all projects on the Organization. 5. Additionally, set the production policies on the "Production" folder.
D. 1. Designate the Organization for production projects only. 2. Ensure that developers do not have the Project Creator IAM role on the Organization. 3. Create development projects outside of the Organization using the developer Google Workspace accounts. 4. Set the policies for all projects on the Organization. 5. Additionally, set the production policies on the individual production projects.

Question # 13

You are managing several projects on Google Cloud and need to interact on a daily basis with BigQuery, Bigtable, and Kubernetes Engine using the gcloud CLI tool. You are travelling a lot and work on different workstations during the week. You want to avoid having to manage the gcloud CLI manually. What should you do?

A. Use a package manager to install gcloud on your workstations instead of installing it manually.
B. Create a Compute Engine instance and install gcloud on the instance. Connect to this instance via SSH to always use the same gcloud installation when interacting with Google Cloud.
C. Install gcloud on all of your workstations. Run the command gcloud components auto-update on each workstation.
D. Use Google Cloud Shell in the Google Cloud Console to interact with Google Cloud.

Question # 14

Your company has developed a monolithic, 3-tier application to allow external users to upload and share files. The solution cannot be easily enhanced and lacks reliability. The development team would like to re-architect the application to adopt microservices and a fully managed service approach, but they need to convince their leadership that the effort is worthwhile. Which advantage(s) should they highlight to leadership?

A. The new approach will be significantly less costly, make it easier to manage the underlying infrastructure, and automatically manage the CI/CD pipelines.
B. The monolithic solution can be converted to a container with Docker. The generated container can then be deployed into a Kubernetes cluster.
C. The new approach will make it easier to decouple infrastructure from application, develop and release new features, manage the underlying infrastructure, manage CI/CD pipelines and perform A/B testing, and scale the solution if necessary.
D. The process can be automated with Migrate for Compute Engine.

Question # 15

You need to migrate Hadoop jobs for your company's Data Science team without modifying the underlying infrastructure. You want to minimize costs and infrastructure management effort. What should you do?

A. Create a Dataproc cluster using standard worker instances.
B. Create a Dataproc cluster using preemptible worker instances.
C. Manually deploy a Hadoop cluster on Compute Engine using standard instances.
D. Manually deploy a Hadoop cluster on Compute Engine using preemptible instances.

Question # 16

Your organization has decided to restrict the use of external IP addresses on instances to only approved instances. You want to enforce this requirement across all of your Virtual Private Clouds (VPCs). What should you do?

A. Remove the default route on all VPCs. Move all approved instances into a new subnet that has a default route to an internet gateway.
B. Create a new VPC in custom mode. Create a new subnet for the approved instances, and set a default route to the internet gateway on this new subnet.
C. Implement a Cloud NAT solution to remove the need for external IP addresses entirely.
D. Set an Organization Policy with a constraint on constraints/compute.vmExternalIpAccess. List the approved instances in the allowedValues list.
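The constraints/compute.vmExternalIpAccess list constraint mentioned in the last option can be sketched as a policy file applied with gcloud. The organization ID, project, zone, and instance name are hypothetical:

```shell
# policy.yaml: allow external IPs only on explicitly approved instances
cat > policy.yaml <<'EOF'
name: organizations/123456789012/policies/compute.vmExternalIpAccess
spec:
  rules:
    - values:
        allowedValues:
          - projects/my-project/zones/us-central1-a/instances/approved-vm
EOF

# Apply the organization policy
gcloud org-policies set-policy policy.yaml
```

Because the constraint is set at the Organization level, it is enforced across every VPC and project beneath it.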

Question # 17

You are managing an application deployed on Cloud Run for Anthos, and you need to define a strategy for deploying new versions of the application. You want to evaluate the new code with a subset of production traffic to decide whether to proceed with the rollout. What should you do?

A. Deploy a new revision to Cloud Run with the new version. Configure traffic percentage between revisions.
B. Deploy a new service to Cloud Run with the new version. Add a Cloud Load Balancing instance in front of both services.
C. In the Google Cloud Console page for Cloud Run, set up continuous deployment using Cloud Build for the development branch. As part of the Cloud Build trigger, configure the substitution variable TRAFFIC_PERCENTAGE with the percentage of traffic you want directed to a new version.
D. In the Google Cloud Console, configure Traffic Director with a new Service that points to the new version of the application on Cloud Run. Configure Traffic Director to send a small percentage of traffic to the new version of the application.
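The revision-based traffic split from the first option can be sketched with the gcloud CLI. Service, image, and revision names are hypothetical, and Cloud Run for Anthos additionally takes platform and cluster flags that are omitted here:

```shell
# Deploy the new version as a new revision without shifting traffic to it yet
gcloud run deploy my-service --image=gcr.io/my-project/app:v2 --no-traffic

# Send 10% of traffic to the newest revision, keeping 90% on the current one
gcloud run services update-traffic my-service \
    --to-revisions=LATEST=10,my-service-v1=90
```

If the canary looks healthy, the same update-traffic command can shift the remaining traffic; if not, traffic can be returned entirely to the previous revision.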

Question # 18

Your company has an application running as a Deployment in a Google Kubernetes Engine (GKE) cluster. When releasing new versions of the application via a rolling deployment, the team has been causing outages. The root cause of the outages is misconfigurations with parameters that are only used in production. You want to put preventive measures for this in the platform to prevent outages. What should you do?

A. Configure liveness and readiness probes in the Pod specification.
B. Configure an uptime alert in Cloud Monitoring.
C. Create a scheduled task to check whether the application is available.
D. Configure health checks on the managed instance group.

Question # 19

Your company has a project in Google Cloud with three Virtual Private Clouds (VPCs). There is a Compute Engine instance on each VPC. Network subnets do not overlap and must remain separated. The network configuration is shown below. Instance #1 is an exception and must communicate directly with both Instance #2 and Instance #3 via internal IPs. How should you accomplish this?

A. Create a cloud router to advertise subnet #2 and subnet #3 to subnet #1.
B. Add two additional NICs to Instance #1 with the following configuration: NIC1 (VPC: VPC #2, subnetwork: subnet #2) and NIC2 (VPC: VPC #3, subnetwork: subnet #3). Update firewall rules to enable traffic between the instances.
C. Create two VPN tunnels via Cloud VPN: one between VPC #1 and VPC #2, and one between VPC #2 and VPC #3. Update firewall rules to enable traffic between the instances.
D. Peer all three VPCs: peer VPC #1 with VPC #2, and peer VPC #2 with VPC #3. Update firewall rules to enable traffic between the instances.

Question # 20

Your company has just recently activated Cloud Identity to manage users. The Google Cloud Organization has been configured as well. The security team needs to secure projects that will be part of the Organization. They want to prohibit IAM users outside the domain from gaining permissions from now on. What should they do?

A. Configure an organization policy to restrict identities by domain.
B. Configure an organization policy to block creation of service accounts.
C. Configure Cloud Scheduler to trigger a Cloud Function every hour that removes all users that don't belong to the Cloud Identity domain from all projects.
D. Create a technical user (e.g., crawler@yourdomain.com), and give it the Project Owner role at the root Organization level. Write a bash script that lists all the IAM rules of all projects within the Organization and deletes all users that do not belong to the company domain. Create a Compute Engine instance in a project within the Organization, and configure gcloud to be executed with the technical user's credentials. Configure a cron job that executes the bash script every hour.
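The domain-restriction approach in the first option maps to the constraints/iam.allowedPolicyMemberDomains constraint, which takes Cloud Identity customer IDs rather than raw domain names. A hedged sketch, with hypothetical organization and customer IDs:

```shell
# policy.yaml: only identities from the listed Cloud Identity customer
# may be granted IAM roles anywhere in the Organization
cat > policy.yaml <<'EOF'
name: organizations/123456789012/policies/iam.allowedPolicyMemberDomains
spec:
  rules:
    - values:
        allowedValues:
          - C0abc123x  # Cloud Identity customer ID for the company domain
EOF

gcloud org-policies set-policy policy.yaml
```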

Question # 21

Your company has an application deployed on Anthos clusters (formerly Anthos GKE) that is running multiple microservices. The cluster has both Anthos Service Mesh and Anthos Config Management configured. End users inform you that the application is responding very slowly. You want to identify the microservice that is causing the delay. What should you do?

A. Use the Service Mesh visualization in the Cloud Console to inspect the telemetry between the microservices.
B. Use Anthos Config Management to create a ClusterSelector selecting the relevant cluster. On the Google Cloud Console page for Google Kubernetes Engine, view the workloads and filter on the cluster. Inspect the configurations of the filtered workloads.
C. Use Anthos Config Management to create a NamespaceSelector selecting the relevant cluster namespace. On the Google Cloud Console page for Google Kubernetes Engine, view the workloads and filter on the namespace. Inspect the configurations of the filtered workloads.
D. Reinstall Istio using the default Istio profile in order to collect request latency. Evaluate the telemetry between the microservices in the Cloud Console.

Question # 22

You want to store critical business information in Cloud Storage buckets. The information is regularly changed, but previous versions need to be referenced on a regular basis. You want to ensure that there is a record of all changes to any information in these buckets. You want to ensure that accidental edits or deletions can be easily rolled back. Which feature should you enable?

A. Bucket Lock
B. Object Versioning
C. Object change notification
D. Object Lifecycle Management
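Of the features listed, Object Versioning can be enabled and inspected with gsutil as follows; the bucket name, object name, and generation number are hypothetical:

```shell
# Turn on versioning so overwrites and deletes keep noncurrent versions
gsutil versioning set on gs://critical-business-data

# List all versions (generations) of objects, including noncurrent ones
gsutil ls -a gs://critical-business-data

# Roll back by copying a previous generation over the live object
gsutil cp gs://critical-business-data/report.txt#1633046400000000 \
    gs://critical-business-data/report.txt
```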

Question # 23

Your team needs to create a Google Kubernetes Engine (GKE) cluster to host a newly built application that requires access to third-party services on the internet. Your company does not allow any Compute Engine instance to have a public IP address on Google Cloud. You need to create a deployment strategy that adheres to these guidelines. What should you do?

A. Create a Compute Engine instance, and install a NAT proxy on the instance. Configure all workloads on GKE to pass through this proxy to access third-party services on the internet.
B. Configure the GKE cluster as a private cluster, and configure a Cloud NAT gateway for the cluster subnet.
C. Configure the GKE cluster as a route-based cluster. Configure Private Google Access on the Virtual Private Cloud (VPC).
D. Configure the GKE cluster as a private cluster. Configure Private Google Access on the Virtual Private Cloud (VPC).
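The "private cluster plus Cloud NAT" combination from option B can be sketched as follows. Cluster name, region, network, and CIDR range are hypothetical:

```shell
# Private cluster: nodes receive no external IP addresses
gcloud container clusters create private-cluster \
    --region=us-central1 --enable-ip-alias \
    --enable-private-nodes --master-ipv4-cidr=172.16.0.0/28

# Cloud NAT gives those nodes outbound-only access to the internet
gcloud compute routers create nat-router \
    --network=default --region=us-central1
gcloud compute routers nats create nat-config \
    --router=nat-router --region=us-central1 \
    --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges
```

Cloud NAT allows workloads to reach third-party services while no instance ever holds a public IP, satisfying the stated company policy.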

Question # 24

You have developed a non-critical update to your application that is running in a managed instance group, and have created a new instance template with the update that you want to release. To prevent any possible impact to the application, you don't want to update any running instances. You want any new instances that are created by the managed instance group to contain the new update. What should you do?

A. Start a new rolling restart operation.
B. Start a new rolling replace operation.
C. Start a new rolling update. Select the Proactive update mode.
D. Start a new rolling update. Select the Opportunistic update mode.
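Option D corresponds to the --type=opportunistic flag on a rolling update, which applies the new template only when the MIG creates or recreates instances for other reasons, leaving running instances untouched. Group, template, and zone names are hypothetical:

```shell
gcloud compute instance-groups managed rolling-action start-update my-mig \
    --version=template=app-template-v2 \
    --type=opportunistic \
    --zone=us-central1-a
```

By contrast, --type=proactive would actively replace running instances with the new template.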

Question # 25

Your organization has stored sensitive data in a Cloud Storage bucket. For regulatory reasons, your company must be able to rotate the encryption key used to encrypt the data in the bucket. The data will be processed in Dataproc. You want to follow Google-recommended practices for security. What should you do?

A. Create a key with Cloud Key Management Service (KMS). Encrypt the data using the encrypt method of Cloud KMS.
B. Create a key with Cloud Key Management Service (KMS). Set the encryption key on the bucket to the Cloud KMS key.
C. Generate a GPG key pair. Encrypt the data using the GPG key. Upload the encrypted data to the bucket.
D. Generate an AES-256 encryption key. Encrypt the data in the bucket using the customer-supplied encryption keys feature.
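Setting a Cloud KMS key as the bucket's default encryption key, as in option B, lets the key be rotated in KMS without re-encrypting the bucket manually. Keyring, key, bucket, and project names are hypothetical:

```shell
# Create a keyring and a key (rotation can be scheduled on the key in KMS)
gcloud kms keyrings create storage-kr --location=us
gcloud kms keys create bucket-key --keyring=storage-kr --location=us \
    --purpose=encryption

# Make the key the bucket's default encryption key
gsutil kms encryption \
    -k projects/my-project/locations/us/keyRings/storage-kr/cryptoKeys/bucket-key \
    gs://sensitive-bucket
```

New objects written to the bucket are then encrypted with the current primary version of the CMEK.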

Question # 26

You are deploying an application on App Engine that needs to integrate with an on-premises database. For security purposes, your on-premises database must not be accessible through the public internet. What should you do?

A. Deploy your application on the App Engine standard environment, and use App Engine firewall rules to limit access to the open on-premises database.
B. Deploy your application on the App Engine standard environment, and use Cloud VPN to limit access to the on-premises database.
C. Deploy your application on the App Engine flexible environment, and use App Engine firewall rules to limit access to the on-premises database.
D. Deploy your application on the App Engine flexible environment, and use Cloud VPN to limit access to the on-premises database.

Question # 27

You need to deploy an application to Google Cloud. The application receives traffic via TCP and reads and writes data to the file system. The application does not support horizontal scaling. The application process requires full control over the data on the file system because concurrent access causes corruption. The business is willing to accept downtime when an incident occurs, but the application must be available 24/7 to support their business operations. You need to design the architecture of this application on Google Cloud. What should you do?

A. Use a managed instance group with instances in multiple zones, use Cloud Filestore, and use an HTTP load balancer in front of the instances.
B. Use a managed instance group with instances in multiple zones, use Cloud Filestore, and use a network load balancer in front of the instances.
C. Use an unmanaged instance group with an active and standby instance in different zones, use a regional persistent disk, and use an HTTP load balancer in front of the instances.
D. Use an unmanaged instance group with an active and standby instance in different zones, use a regional persistent disk, and use a network load balancer in front of the instances.

Question # 28

Your company has a networking team and a development team. The development team runs applications on Compute Engine instances that contain sensitive data. The development team requires administrative permissions for Compute Engine. Your company requires all network resources to be managed by the networking team. The development team does not want the networking team to have access to the sensitive data on the instances. What should you do?

A. 1. Create a project with a standalone VPC and assign the Network Admin role to the networking team. 2. Create a second project with a standalone VPC and assign the Compute Admin role to the development team. 3. Use Cloud VPN to join the two VPCs.
B. Create a project with a standalone Virtual Private Cloud (VPC), assign the Network Admin role to the networking team, and assign the Compute Admin role to the development team.
C. 1. Create a project with a Shared VPC and assign the Network Admin role to the networking team. 2. Create a second project without a VPC, configure it as a Shared VPC service project, and assign the Compute Admin role to the development team.
D. 1. Create a project with a standalone VPC and assign the Network Admin role to the networking team. 2. Create a second project with a standalone VPC and assign the Compute Admin role to the development team. 3. Use VPC Peering to join the two VPCs.

Question # 29

Your company sends all Google Cloud logs to Cloud Logging. Your security team wants to monitor the logs. You want to ensure that the security team can react quickly if an anomaly such as an unwanted firewall change or server breach is detected. You want to follow Google-recommended practices. What should you do?

A. Schedule a cron job with Cloud Scheduler. The scheduled job queries the logs every minute for the relevant events.
B. Export logs to BigQuery, and trigger a query in BigQuery to process the log data for the relevant events.
C. Export logs to a Pub/Sub topic, and trigger a Cloud Function with the relevant log events.
D. Export logs to a Cloud Storage bucket, and trigger Cloud Run with the relevant log events.
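The Pub/Sub-plus-Cloud-Function pipeline from option C can be sketched as a logging sink into a topic and a function triggered by it. Topic, sink, project, function, and filter values are hypothetical, and the function source directory is assumed to exist:

```shell
# Route only the relevant security events to a Pub/Sub topic
gcloud pubsub topics create security-events
gcloud logging sinks create security-sink \
    pubsub.googleapis.com/projects/my-project/topics/security-events \
    --log-filter='resource.type="gce_firewall_rule"'

# Trigger a Cloud Function on each matching log entry
gcloud functions deploy notify-security-team \
    --runtime=python311 --trigger-topic=security-events \
    --entry-point=handle_event --source=./function
```

This gives the security team near-real-time, push-based notification rather than polling the logs.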

Question # 30

Your company is designing its application landscape on Compute Engine. Whenever a zonal outage occurs, the application should be restored in another zone as quickly as possible with the latest application data. You need to design the solution to meet this requirement. What should you do?

A. Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs, use the latest snapshot to restore the disk in the same zone.
B. Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up the application in another zone in the same region. Use the regional persistent disk for the application data.
C. Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs, use the latest snapshot to restore the disk in another zone within the same region.
D. Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up the application in another region. Use the regional persistent disk for the application data.

Question # 31

The operations team in your company wants to save Cloud VPN log events for one year. You need to configure the cloud infrastructure to save the logs. What should you do?

A. Set up a filter in Cloud Logging and a topic in Pub/Sub to publish the logs.
B. Set up a Cloud Logging dashboard titled Cloud VPN Logs, and then add a chart that queries for the VPN metrics over a one-year time period.
C. Enable the Compute Engine API and then enable logging on the firewall rules that match the traffic you want to save.
D. Set up a filter in Cloud Logging and a Cloud Storage bucket as an export target for the logs you want to save.

Question # 32

Your company is planning to upload several important files to Cloud Storage. After the upload is completed, they want to verify that the uploaded content is identical to what they have on-premises. You want to minimize the cost and effort of performing this check. What should you do?

A. 1. Use gsutil -m to upload all the files to Cloud Storage. 2. Use gsutil cp to download the uploaded files. 3. Use Linux diff to compare the content of the files.
B. 1. Use gsutil -m to upload all the files to Cloud Storage. 2. Develop a custom Java application that computes CRC32C hashes. 3. Use gsutil ls -L gs://[YOUR_BUCKET_NAME] to collect CRC32C hashes of the uploaded files. 4. Compare the hashes.
C. 1. Use Linux shasum to compute a digest of the files you want to upload. 2. Use gsutil -m to upload all the files to Cloud Storage. 3. Use gsutil cp to download the uploaded files. 4. Use Linux shasum to compute a digest of the downloaded files. 5. Compare the hashes.
D. 1. Use gsutil -m to upload all the files to Cloud Storage. 2. Use gsutil hash -c FILE_NAME to generate CRC32C hashes of all on-premises files. 3. Use gsutil ls -L gs://[YOUR_BUCKET_NAME] to collect CRC32C hashes of the uploaded files. 4. Compare the hashes.
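Option D works because Cloud Storage records a CRC32C checksum for every object, so no download is needed. As a hedged illustration of what gsutil hash computes, here is a minimal bit-by-bit CRC-32C sketch; real tools use optimized libraries such as google-crc32c rather than this loop.

```python
import base64
import struct

def crc32c(data: bytes) -> int:
    """Bit-by-bit CRC-32C (Castagnoli): init/xor-out 0xFFFFFFFF,
    reflected polynomial 0x82F63B78. Illustrative only; real tools
    use table-driven or hardware-accelerated implementations."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc >> 1) ^ 0x82F63B78) if crc & 1 else (crc >> 1)
    return crc ^ 0xFFFFFFFF

def storage_style_hash(data: bytes) -> str:
    """Cloud Storage and gsutil report CRC32C as the base64 encoding
    of the big-endian 4-byte checksum value."""
    return base64.b64encode(struct.pack(">I", crc32c(data))).decode("ascii")
```

Comparing these base64 strings against the Hash (crc32c) values printed by gsutil ls -L verifies integrity without re-downloading the files, which is why D minimizes both cost and effort.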

Question # 33

Your company is developing a new application that will allow globally distributed users to upload pictures and share them with other selected users. The application will support millions of concurrent users. You want to allow developers to focus on just building code without having to create and maintain the underlying infrastructure. Which service should you use to deploy the application?

A. App Engine
B. Cloud Endpoints
C. Compute Engine
D. Google Kubernetes Engine

Question # 34

You have deployed an application to Kubernetes Engine, and are using the Cloud SQL proxy container to make the Cloud SQL database available to the services running on Kubernetes. You are notified that the application is reporting database connection issues. Your company policies require a postmortem. What should you do?

A. Use gcloud sql instances restart.
B. Validate that the Service Account used by the Cloud SQL proxy container still has the Cloud Build Editor role.
C. In the GCP Console, navigate to Stackdriver Logging. Consult logs for Kubernetes Engine and Cloud SQL.
D. In the GCP Console, navigate to Cloud SQL. Restore the latest backup. Use kubectl to restart all pods.

Question # 35

Your company has sensitive data in Cloud Storage buckets. Data analysts have Identity and Access Management (IAM) permissions to read the buckets. You want to prevent data analysts from retrieving the data in the buckets from outside the office network. What should you do?

A. 1. Create a VPC Service Controls perimeter that includes the projects with the buckets. 2. Create an access level with the CIDR of the office network.
B. 1. Create a firewall rule for all instances in the Virtual Private Cloud (VPC) network for the source range. 2. Use the Classless Inter-domain Routing (CIDR) of the office network.
C. 1. Create a Cloud Function to remove IAM permissions from the buckets, and another Cloud Function to add IAM permissions to the buckets. 2. Schedule the Cloud Functions with Cloud Scheduler to add permissions at the start of business and remove permissions at the end of business.
D. 1. Create a Cloud VPN to the office network. 2. Configure Private Google Access for on-premises hosts.

Question # 36

Your company and one of its partners each have a Google Cloud project in separate organizations. Your company's project (prj-a) runs in Virtual Private Cloud (vpc-a). The partner's project (prj-b) runs in vpc-b. There are two instances running on vpc-a and one instance running on vpc-b. Subnets defined in both VPCs are not overlapping. You need to ensure that all instances communicate with each other via internal IPs, minimizing latency and maximizing throughput. What should you do?

A. Set up a network peering between vpc-a and vpc-b
B. Set up a VPN between vpc-a and vpc-b using Cloud VPN
C. Configure IAP TCP forwarding on the instance in vpc-b, and then launch the following gcloud command from one of the instances in vpc-a: gcloud …
D. 1. Create an additional instance in vpc-a. 2. Create an additional instance in vpc-b. 3. Install OpenVPN on the newly created instances. 4. Configure a VPN tunnel between vpc-a and vpc-b with the help of OpenVPN.

Question # 37

Your company has an application running on multiple Compute Engine instances. You need to ensure that the application can communicate with an on-premises service that requires high throughput via internal IPs, while minimizing latency. What should you do?

A. Use OpenVPN to configure a VPN tunnel between the on-premises environment and Google Cloud.
B. Configure a direct peering connection between the on-premises environment and Google Cloud.
C. Use Cloud VPN to configure a VPN tunnel between the on-premises environment and Google Cloud.
D. Configure a Dedicated Interconnect connection between the on-premises environment and Google Cloud.

Question # 38

You are moving an application that uses MySQL from on-premises to Google Cloud. The application will run on Compute Engine and will use Cloud SQL. You want to cut over to the Compute Engine deployment of the application with minimal downtime and no data loss to your customers. You want to migrate the application with minimal modification. You also need to determine the cutover strategy. What should you do?

A. 1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server. 2. Stop the on-premises application. 3. Create a mysqldump of the on-premises MySQL server. 4. Upload the dump to a Cloud Storage bucket. 5. Import the dump into Cloud SQL. 6. Modify the source code of the application to write queries to both databases and read from its local database. 7. Start the Compute Engine application. 8. Stop the on-premises application.
B. 1. Set up Cloud SQL proxy and MySQL proxy. 2. Create a mysqldump of the on-premises MySQL server. 3. Upload the dump to a Cloud Storage bucket. 4. Import the dump into Cloud SQL. 5. Stop the on-premises application. 6. Start the Compute Engine application.
C. 1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server. 2. Stop the on-premises application. 3. Start the Compute Engine application, configured to read and write to the on-premises MySQL server. 4. Create the replication configuration in Cloud SQL. 5. Configure the source database server to accept connections from the Cloud SQL replica. 6. Finalize the Cloud SQL replica configuration. 7. When replication has been completed, stop the Compute Engine application. 8. Promote the Cloud SQL replica to a standalone instance. 9. Restart the Compute Engine application, configured to read and write to the Cloud SQL standalone instance.
D. 1. Stop the on-premises application. 2. Create a mysqldump of the on-premises MySQL server. 3. Upload the dump to a Cloud Storage bucket. 4. Import the dump into Cloud SQL. 5. Start the application on Compute Engine.

Question # 39

You are implementing a single Cloud SQL MySQL second-generation database that contains business-critical transaction data. You want to ensure that the minimum amount of data is lost in case of catastrophic failure. Which two features should you implement? (Choose two.)

A. Sharding
B. Read replicas
C. Binary logging
D. Automated backups
E. Semisynchronous replication

Question # 40

You are working at a sports association whose members range in age from 8 to 30. The association collects a large amount of health data, such as sustained injuries. You are storing this data in BigQuery. Current legislation requires you to delete such information upon request of the subject. You want to design a solution that can accommodate such a request. What should you do?

A. Use a unique identifier for each individual. Upon a deletion request, delete all rows from BigQuery with this identifier.
B. When ingesting new data in BigQuery, run the data through the Data Loss Prevention (DLP) API to identify any personal information. As part of the DLP scan, save the result to Data Catalog. Upon a deletion request, query Data Catalog to find the column with personal information.
C. Create a BigQuery view over the table that contains all data. Upon a deletion request, exclude the rows that affect the subject's data from this view. Use this view instead of the source table for all analysis tasks.
D. Use a unique identifier for each individual. Upon a deletion request, overwrite the column with the unique identifier with a salted SHA256 of its value.
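The overwrite in option D is a form of pseudonymization: replacing the identifier with a salted hash severs the link to the individual. A minimal sketch, assuming a hypothetical subject ID and a randomly generated salt:

```python
import hashlib
import secrets

def pseudonymize(user_id: str, salt: bytes) -> str:
    """Overwrite-style pseudonymization: salted SHA-256 of the identifier,
    returned as a 64-character hex digest."""
    return hashlib.sha256(salt + user_id.encode("utf-8")).hexdigest()

# If the salt is random and discarded after the overwrite, the original
# identifier can no longer be recomputed or verified from the stored value.
salt = secrets.token_bytes(16)
token = pseudonymize("member-4711", salt)  # "member-4711" is a hypothetical ID
```

Note the design trade-off the question hinges on: hashing obscures the identifier but leaves the health data rows in place, whereas option A actually deletes the subject's data, which is what a legal deletion request typically requires.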

Question # 41

Your company has an application running on a deployment in a GKE cluster. You have a separate cluster for development, staging, and production. You have discovered that the team is able to deploy a Docker image to the production cluster without first testing the deployment in development and then staging. You want to allow the team to have autonomy but want to prevent this from happening. You want a Google Cloud solution that can be implemented quickly with minimal effort. What should you do?

A. Create a Kubernetes admission controller to prevent the container from starting if it is not approved for usage in the given environment.
B. Configure a Kubernetes lifecycle hook to prevent the container from starting if it is not approved for usage in the given environment.
C. Implement a corporate policy to prevent teams from deploying a Docker image to an environment unless the Docker image was tested in an earlier environment.
D. Configure Binary Authorization policies for the development, staging, and production clusters. Create attestations as part of the continuous integration pipeline.

Question # 42

Your company provides a recommendation engine for retail customers. You are providing retail customers with an API where they can submit a user ID and the API returns a list of recommendations for that user. You are responsible for the API lifecycle and want to ensure stability for your customers in case the API makes backward-incompatible changes. You want to follow Google-recommended practices. What should you do?

A. Create a distribution list of all customers to inform them of an upcoming backward-incompatible change at least one month before replacing the old API with the new API.
B. Create an automated process to generate API documentation, and update the public API documentation as part of the CI/CD process when deploying an update to the API.
C. Use a versioning strategy for the APIs that increases the version number on every backward-incompatible change.
D. Use a versioning strategy for the APIs that adds the suffix “DEPRECATED” to the current API version number on every backward-incompatible change. Use the current version number for the new API.

Question # 43

Your company has announced that they will be outsourcing operations functions. You want to allow developers to easily stage new versions of a cloud-based application in the production environment and allow the outsourced operations team to autonomously promote staged versions to production. You want to minimize the operational overhead of the solution. Which Google Cloud product should you migrate to?

A. App Engine
B. GKE On-Prem
C. Compute Engine
D. Google Kubernetes Engine

Question # 44

Your company has an application running on Compute Engine that allows users to play their favorite music. There are a fixed number of instances. Files are stored in Cloud Storage, and data is streamed directly to users. Users are reporting that they sometimes need to attempt to play popular songs multiple times before they are successful. You need to improve the performance of the application. What should you do?

A. 1. Copy popular songs into Cloud SQL as a blob. 2. Update application code to retrieve data from Cloud SQL when Cloud Storage is overloaded.
B. 1. Create a managed instance group with Compute Engine instances. 2. Create a global load balancer and configure it with two backends: the managed instance group and the Cloud Storage bucket. 3. Enable Cloud CDN on the bucket backend.
C. 1. Mount the Cloud Storage bucket using gcsfuse on all backend Compute Engine instances. 2. Serve music files directly from the backend Compute Engine instances.
D. 1. Create a Cloud Filestore NFS volume and attach it to the backend Compute Engine instances. 2. Download popular songs to Cloud Filestore. 3. Serve music files directly from the backend Compute Engine instances.

Question # 45

Your team will start developing a new application using a microservices architecture on Kubernetes Engine. As part of the development lifecycle, any code change that has been pushed to the remote develop branch on your GitHub repository should be built and tested automatically. When the build and test are successful, the relevant microservice will be deployed automatically in the development environment. You want to ensure that all code deployed in the development environment follows this process. What should you do?

A. Have each developer install a pre-commit hook on their workstation that tests the code and builds the container when committing on the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster.
B. Install a post-commit hook on the remote git repository that tests the code and builds the container when code is pushed to the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster.
C. Create a Cloud Build trigger based on the development branch that tests the code, builds the container, and stores it in Container Registry. Create a deployment pipeline that watches for new images and deploys the new image on the development cluster. Ensure only the deployment tool has access to deploy new versions.
D. Create a Cloud Build trigger based on the development branch to build a new container image and store it in Container Registry. Rely on Vulnerability Scanning to ensure the code tests succeed. As the final step of the Cloud Build process, deploy the new container image on the development cluster. Ensure only Cloud Build has access to deploy new versions.

Question # 46

Your company has just acquired another company, and you have been asked to integrate their existing Google Cloud environment into your company's data center. Upon investigation, you discover that some of the RFC 1918 IP ranges being used in the new company's Virtual Private Cloud (VPC) overlap with your data center IP space. What should you do to enable connectivity and make sure that there are no routing conflicts when connectivity is established?

A. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply new IP addresses so there is no overlapping IP space.
B. Create a Cloud VPN connection from the new VPC to the data center, and create a Cloud NAT instance to perform NAT on the overlapping IP space.
C. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply a custom route advertisement to block the overlapping IP space.
D. Create a Cloud VPN connection from the new VPC to the data center, and apply a firewall rule that blocks the overlapping IP space.

Question # 47

You need to deploy an application on Google Cloud that must run on a Debian Linux environment. The application requires extensive configuration in order to operate correctly. You want to ensure that you can install Debian distribution updates with minimal manual intervention whenever they become available. What should you do?

A. Create a Compute Engine instance template using the most recent Debian image. Create an instance from this template, and install and configure the application as part of the startup script. Repeat this process whenever a new Google-managed Debian image becomes available.
B. Create a Debian-based Compute Engine instance, install and configure the application, and use OS patch management to install available updates.
C. Create an instance with the latest available Debian image. Connect to the instance via SSH, and install and configure the application on the instance. Repeat this process whenever a new Google-managed Debian image becomes available.
D. Create a Docker container with Debian as the base image. Install and configure the application as part of the Docker image creation process. Host the container on Google Kubernetes Engine and restart the container whenever a new update is available.

Question # 48

You have an application that runs in Google Kubernetes Engine (GKE). Over the last 2 weeks, customers have reported that a specific part of the application returns errors very frequently. You currently have no logging or monitoring solution enabled on your GKE cluster. You want to diagnose the problem, but you have not been able to replicate the issue. You want to cause minimal disruption to the application. What should you do?

A. 1. Update your GKE cluster to use Cloud Operations for GKE. 2. Use the GKE Monitoring dashboard to investigate logs from affected Pods.
B. 1. Create a new GKE cluster with Cloud Operations for GKE enabled. 2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster. 3. Use the GKE Monitoring dashboard to investigate logs from affected Pods.
C. 1. Update your GKE cluster to use Cloud Operations for GKE, and deploy Prometheus. 2. Set an alert to trigger whenever the application returns an error.
D. 1. Create a new GKE cluster with Cloud Operations for GKE enabled, and deploy Prometheus. 2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster. 3. Set an alert to trigger whenever the application returns an error.

Question # 49

Your company has an application running on Google Cloud that is collecting data from thousands of physical devices that are globally distributed. Data is published to Pub/Sub and streamed in real time into an SSD Cloud Bigtable cluster via a Dataflow pipeline. The operations team informs you that your Cloud Bigtable cluster has a hotspot, and queries are taking longer than expected. You need to resolve the problem and prevent it from happening in the future. What should you do?

A. Advise your clients to use HBase APIs instead of NodeJS APIs.
B. Review your RowKey strategy and ensure that keys are evenly spread across the alphabet.
C. Delete records older than 30 days.
D. Double the number of nodes you currently have.
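Option B targets the root cause: Bigtable stores rows in lexicographic key order, so a key that begins with a monotonically increasing value (such as a timestamp) sends every new write to the same tablet. The hypothetical key builders below (field names device_id and ts_millis are assumptions for the sketch) contrast the two designs:

```python
# Hypothetical row-key builders showing why row-key design matters:
# a monotonically increasing prefix funnels all writes to one tablet,
# while a high-cardinality prefix spreads them across tablets.

def hotspot_key(ts_millis: int, device_id: str) -> str:
    # Timestamp first: every new write sorts after the previous one.
    return f"{ts_millis:013d}#{device_id}"

def distributed_key(device_id: str, ts_millis: int) -> str:
    # Device ID first: consecutive writes land under different prefixes.
    return f"{device_id}#{ts_millis:013d}"

keys = [distributed_key(f"device-{i % 5}", 1700000000000 + i) for i in range(10)]
```

With the device ID leading, ten consecutive writes from five devices fan out over five key prefixes instead of piling onto a single, ever-growing end of the keyspace.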

Question # 50

Your operations team currently stores 10 TB of data in an object storage service from a third-party provider. They want to move this data to a Cloud Storage bucket as quickly as possible, following Google-recommended practices. They want to minimize the cost of this data migration. Which approach should they use?

A. Use the gsutil mv command to move the data.
B. Use the Storage Transfer Service to move the data.
C. Download the data to a Transfer Appliance and ship it to Google.
D. Download the data to the on-premises data center and upload it to the Cloud Storage bucket.

Question # 51

Your company has a Kubernetes application that pulls messages from Pub/Sub and stores them in Firestore. Because the application is simple, it was deployed as a single pod. The infrastructure team has analyzed Pub/Sub metrics and discovered that the application cannot process the messages in real time. Most of them wait for minutes before being processed. You need to scale the elaboration process, which is I/O-intensive. What should you do?

A. Configure Kubernetes autoscaling based on the subscription/push_request metric.
B. Use the --enable-autoscaling flag when you create the Kubernetes cluster.
C. Configure Kubernetes autoscaling based on the subscription/num_undelivered_messages metric.
D. Use kubectl autoscale deployment APP_NAME --max 6 --min 2 --cpu-percent 50 to configure Kubernetes autoscaling for the deployment.
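Whichever metric drives it, the Horizontal Pod Autoscaler applies the same arithmetic: desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue), clamped to the configured bounds. A small sketch of that formula (the min/max defaults mirror the --min 2 --max 6 flags in option D; the message counts are made-up inputs):

```python
import math

def desired_replicas(current_replicas: int, metric_value: float,
                     target_per_replica: float,
                     min_replicas: int = 2, max_replicas: int = 6) -> int:
    """Kubernetes HPA scaling rule:
    desired = ceil(currentReplicas * currentMetric / targetMetric),
    clamped to the configured min/max replica counts."""
    desired = math.ceil(current_replicas * metric_value / target_per_replica)
    return max(min_replicas, min(max_replicas, desired))

# With 2 replicas, 1000 undelivered messages, and a target of 250 messages
# per replica, the raw result is 8, clamped here to the max of 6.
```

This is why option C helps an I/O-bound backlog while option D does not: a pod waiting on Firestore writes keeps its CPU low, so a CPU-based target never triggers a scale-up even as num_undelivered_messages grows.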

Question # 52

Your company has just recently activated Cloud Identity to manage users. The Google Cloud Organization has been configured as well. The security team needs to secure projects that will be part of the Organization. They want to prohibit IAM users outside the domain from gaining permissions from now on. What should they do?

A. Configure an organization policy to restrict identities by domain
B. Configure an organization policy to block creation of service accounts
C. Configure Cloud Scheduler to trigger a Cloud Function every hour that removes all users that don't belong to the Cloud Identity domain from all projects.

Question # 53

Your company wants you to build a highly reliable web application with a few public APIs as the backend. You don't expect a lot of user traffic, but traffic could spike occasionally. You want to leverage Cloud Load Balancing, and the solution must be cost-effective for users. What should you do?

A. Store static content such as HTML and images in Cloud CDN. Host the APIs on App Engine and store the user data in Cloud SQL.
B. Store static content such as HTML and images in a Cloud Storage bucket. Host the APIs on a zonal Google Kubernetes Engine cluster with worker nodes in multiple zones, and save the user data in Cloud Spanner.
C. Store static content such as HTML and images in Cloud CDN. Use Cloud Run to host the APIs and save the user data in Cloud SQL.
D. Store static content such as HTML and images in a Cloud Storage bucket. Use Cloud Functions to host the APIs and save the user data in Firestore.

Question # 54

You are developing your microservices application on Google Kubernetes Engine. During testing, you want to validate the behavior of your application in case a specific microservice should suddenly crash. What should you do?

A. Add a taint to one of the nodes of the Kubernetes cluster. For the specific microservice, configure a pod anti-affinity label that has the name of the tainted node as a value.
B. Use Istio’s fault injection on the particular microservice whose faulty behavior you want to simulate.
C. Destroy one of the nodes of the Kubernetes cluster to observe the behavior.
D. Configure Istio’s traffic management features to steer the traffic away from a crashing microservice.

Question # 55

You are working at a financial institution that stores mortgage loan approval documents on Cloud Storage. Any change to these approval documents must be uploaded as a separate approval file, so you want to ensure that these documents cannot be deleted or overwritten for the next 5 years. What should you do?

A. Create a retention policy on the bucket for the duration of 5 years. Create a lock on the retention policy.
B. Create the bucket with uniform bucket-level access, and grant a service account the role of Object Writer. Use the service account to upload new files.
C. Use a customer-managed key for the encryption of the bucket. Rotate the key after 5 years.
D. Create the bucket with fine-grained access control, and grant a service account the role of Object Writer. Use the service account to upload new files.
