Special limited-time discount offer: use discount code DP2021 to get 20% off.

PDF Only

$35.00 Free Updates for Up to 90 Days

  • SAP-C02 Dumps PDF
  • 435 Questions
  • Updated On March 16, 2024

PDF + Test Engine

$60.00 Free Updates for Up to 90 Days

  • SAP-C02 Question Answers
  • 435 Questions
  • Updated On March 16, 2024

Test Engine

$50.00 Free Updates for Up to 90 Days

  • SAP-C02 Practice Questions
  • 435 Questions
  • Updated On March 16, 2024
Check Our Free Amazon SAP-C02 Online Test Engine Demo.

How to pass Amazon SAP-C02 exam with the help of dumps?

DumpsPool provides the high-quality resources you've been searching for without success elsewhere. So it's time to stop stressing and start preparing. Our Online Test Engine gives you the guidance you need to pass the certification exam. We stand behind top-grade results because every topic is covered in a precise, understandable manner. Our expert team prepared the latest Amazon SAP-C02 Dumps to meet your training needs, and they come in two formats: Dumps PDF and Online Test Engine.

How Do I Know Amazon SAP-C02 Dumps are Worth it?

Did we mention our latest SAP-C02 Dumps PDF is also available as an Online Test Engine? Of all the features you are offered here at DumpsPool, the money-back guarantee may be the best one: you don't have to worry about your payment. Beyond affordable Real Exam Dumps, you are also offered three months of free updates.

You can easily scroll through our large catalog of certification exams and pick any exam to start your training. That's right: DumpsPool isn't limited to just Amazon exams. We know our customers need an authentic, reliable resource, so we make sure there is never any outdated content in our study materials. Our expert team keeps an eye on every update so that everything stays current. Our main focus is helping you understand the real exam format so you can pass the exam more easily.

IT Students Are Using our AWS Certified Solutions Architect - Professional Dumps Worldwide!

It is a well-established fact that certification exams are hard to conquer without help from experts, and that is exactly the point of using AWS Certified Solutions Architect - Professional Practice Question Answers. You are supported by IT experts who have been through what you are about to face and know it well. DumpsPool's 24/7 customer service keeps you in touch with these experts whenever you need them. Our 100% success rate and validity around the world make us the most trusted resource candidates use. The updated Dumps PDF helps you pass the exam on the first attempt, and the money-back guarantee means you can buy with confidence: you can claim a refund if you do not pass the exam.

How to Get SAP-C02 Real Exam Dumps?

Getting access to real exam dumps is as easy as pressing a button, literally! There are many resources available online, but most of them sell scams or copied content. So if you are going to attempt the SAP-C02 exam, you need to be sure you are buying the right kind of dumps. All the Dumps PDF available on DumpsPool are unique and up to date, and our Practice Question Answers are tested and approved by professionals, making this one of the most authentic resources available on the internet. Our experts have made sure the Online Test Engine is free from outdated or fake content, repeated questions, and vague or incorrect information. We make every penny count, and you leave our platform fully satisfied!

Amazon SAP-C02 Exam Overview:

Exam Detail Information
Exam Name AWS Certified Solutions Architect - Professional (SAP-C02)
Cost $300 USD
Available Languages English, Japanese, Korean, and Simplified Chinese
Duration 180 minutes (3 hours)
Format Multiple choice and multiple answer
Passing Score 750 out of 1000 points
Prerequisites No formal prerequisite; AWS recommends two or more years of hands-on experience designing and deploying cloud architecture on AWS (an associate-level certification, such as AWS Certified Solutions Architect - Associate, is helpful but not required)
Delivery Method Proctored exam delivered at a testing center or online with online proctoring
Exam Provider AWS (Amazon Web Services)
Renewal Period 3 years

Amazon SAP-C02 Exam Topics Breakdown

Domain Weight
Domain 1: Design Solutions for Organizational Complexity 26%
Domain 2: Design for New Solutions 29%
Domain 3: Continuous Improvement for Existing Solutions 25%
Domain 4: Accelerate Workload Migration and Modernization 20%
Amazon SAP-C02 Sample Question Answers

Question # 1

A company wants to migrate an Amazon Aurora MySQL DB cluster from an existing AWS account to a new AWS account in the same AWS Region. Both accounts are members of the same organization in AWS Organizations. The company must minimize database service interruption before the company performs DNS cutover to the new database.

Which migration strategy will meet this requirement?

A. Take a snapshot of the existing Aurora database. Share the snapshot with the new AWS account. Create an Aurora DB cluster in the new account from the snapshot.
B. Create an Aurora DB cluster in the new AWS account. Use AWS Database Migration Service (AWS DMS) to migrate data between the two Aurora DB clusters.
C. Use AWS Backup to share an Aurora database backup from the existing AWS account with the new AWS account. Create an Aurora DB cluster in the new AWS account from the snapshot.
D. Create an Aurora DB cluster in the new AWS account. Use AWS Application Migration Service to migrate data between the two Aurora DB clusters.
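As a sketch of how the snapshot-sharing path in option A maps to API calls, the payloads below mirror boto3's RDS operations for sharing an Aurora cluster snapshot with another account and restoring from it. The account IDs, identifiers, and ARNs are made-up placeholder values, and no real API calls are made.

```python
def share_snapshot_request(snapshot_id: str, target_account_id: str) -> dict:
    # Parameters for rds.modify_db_cluster_snapshot_attribute: adding an
    # account ID to the "restore" attribute shares the snapshot with it.
    return {
        "DBClusterSnapshotIdentifier": snapshot_id,
        "AttributeName": "restore",
        "ValuesToAdd": [target_account_id],
    }

def restore_request(snapshot_arn: str, new_cluster_id: str) -> dict:
    # Parameters for rds.restore_db_cluster_from_snapshot, run in the
    # new account against the shared snapshot's ARN.
    return {
        "DBClusterIdentifier": new_cluster_id,
        "SnapshotIdentifier": snapshot_arn,
        "Engine": "aurora-mysql",
    }

share = share_snapshot_request("prod-cluster-snap", "222222222222")
restore = restore_request(
    "arn:aws:rds:us-east-1:111111111111:cluster-snapshot:prod-cluster-snap",
    "prod-cluster-copy",
)
print(share["AttributeName"])  # restore
print(restore["Engine"])       # aurora-mysql
```

Note that the source cluster stays online while the snapshot is shared and restored; the interruption is limited to the final cutover.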

Question # 2

A company is planning a migration from an on-premises data center to the AWS Cloud. The company plans to use multiple AWS accounts that are managed in an organization in AWS Organizations. The company will create a small number of accounts initially and will add accounts as needed. A solutions architect must design a solution that turns on AWS CloudTrail in all AWS accounts.

What is the MOST operationally efficient solution that meets these requirements?

A. Create an AWS Lambda function that creates a new CloudTrail trail in all AWS accounts in the organization. Invoke the Lambda function daily by using a scheduled action in Amazon EventBridge.
B. Create a new CloudTrail trail in the organization's management account. Configure the trail to log all events for all AWS accounts in the organization.
C. Create a new CloudTrail trail in all AWS accounts in the organization. Create new trails whenever a new account is created.
D. Create an AWS Systems Manager Automation runbook that creates a CloudTrail trail in all AWS accounts in the organization. Invoke the automation by using Systems Manager State Manager.
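To make the organization-trail approach in option B concrete, here is an illustrative payload mirroring boto3's cloudtrail.create_trail as it would be called from the management account. The trail name and bucket name are placeholder assumptions.

```python
def org_trail_request(trail_name: str, bucket: str) -> dict:
    # Parameters for cloudtrail.create_trail in the management account.
    return {
        "Name": trail_name,
        "S3BucketName": bucket,
        "IsMultiRegionTrail": True,   # capture events from every Region
        "IsOrganizationTrail": True,  # apply to all member accounts,
                                      # including accounts added later
    }

req = org_trail_request("org-trail", "central-trail-logs")
print(req["IsOrganizationTrail"])  # True
```

Because an organization trail automatically covers accounts created in the future, no per-account automation is needed.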

Question # 3

A solutions architect is preparing to deploy a new security tool into several previously unused AWS Regions. The solutions architect will deploy the tool by using an AWS CloudFormation stack set. The stack set's template contains an IAM role that has a custom name. Upon creation of the stack set, no stack instances are created successfully.

What should the solutions architect do to deploy the stacks successfully?

A. Enable the new Regions in all relevant accounts. Specify the CAPABILITY_NAMED_IAM capability during the creation of the stack set.
B. Use the Service Quotas console to request a quota increase for the number of CloudFormation stacks in each new Region in all relevant accounts. Specify the CAPABILITY_IAM capability during the creation of the stack set.
C. Specify the CAPABILITY_NAMED_IAM capability and the SELF_MANAGED permissions model during the creation of the stack set.
D. Specify an administration role ARN and the CAPABILITY_IAM capability during the creation of the stack set.
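The capability acknowledgment the question hinges on looks like this in an illustrative cloudformation.create_stack_set payload. The stack set name is an assumption, and the template body is a stub.

```python
def stack_set_request(name: str, template_body: str) -> dict:
    # Parameters for cloudformation.create_stack_set.
    return {
        "StackSetName": name,
        "TemplateBody": template_body,
        # Required whenever the template creates IAM resources with
        # custom names; CAPABILITY_IAM alone is not sufficient.
        "Capabilities": ["CAPABILITY_NAMED_IAM"],
    }

req = stack_set_request("security-tool", "{}")
print(req["Capabilities"])  # ['CAPABILITY_NAMED_IAM']
```

Without the acknowledgment, CloudFormation refuses to create IAM resources that have custom names, which is why the stack instances fail.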

Question # 4

A company has an IoT platform that runs in an on-premises environment. The platform consists of a server that connects to IoT devices by using the MQTT protocol. The platform collects telemetry data from the devices at least once every 5 minutes. The platform also stores device metadata in a MongoDB cluster.

An application that is installed on an on-premises machine runs periodic jobs to aggregate and transform the telemetry and device metadata. The application creates reports that users view by using another web application that runs on the same on-premises machine. The periodic jobs take 120-600 seconds to run. However, the web application is always running.

The company is moving the platform to AWS and must reduce the operational overhead of the stack.

Which combination of steps will meet these requirements with the LEAST operational overhead? (Select THREE.)

A. Use AWS Lambda functions to connect to the IoT devices.
B. Configure the IoT devices to publish to AWS IoT Core.
C. Write the metadata to a self-managed MongoDB database on an Amazon EC2 instance.
D. Write the metadata to Amazon DocumentDB (with MongoDB compatibility).
E. Use AWS Step Functions state machines with AWS Lambda tasks to prepare the reports and to write the reports to Amazon S3. Use Amazon CloudFront with an S3 origin to serve the reports.
F. Use an Amazon Elastic Kubernetes Service (Amazon EKS) cluster with Amazon EC2 instances to prepare the reports. Use an ingress controller in the EKS cluster to serve the reports.

Question # 5

A company is designing an AWS environment for a manufacturing application. The application has been successful with customers, and the application's user base has increased. The company has connected the AWS environment to the company's on-premises data center through a 1 Gbps AWS Direct Connect connection. The company has configured BGP for the connection.

The company must update the existing network connectivity solution to ensure that the solution is highly available, fault tolerant, and secure.

Which solution will meet these requirements MOST cost-effectively?

A. Add a dynamic private IP AWS Site-to-Site VPN as a secondary path to secure data in transit and provide resilience for the Direct Connect connection. Configure MACsec to encrypt traffic inside the Direct Connect connection.
B. Provision another Direct Connect connection between the company's on-premises data center and AWS to increase the transfer speed and provide resilience. Configure MACsec to encrypt traffic inside the Direct Connect connection.
C. Configure multiple private VIFs. Load balance data across the VIFs between the on-premises data center and AWS to provide resilience.
D. Add a static AWS Site-to-Site VPN as a secondary path to secure data in transit and to provide resilience for the Direct Connect connection.

Question # 6

A company deploys workloads in multiple AWS accounts. Each account has a VPC with VPC flow logs published in text log format to a centralized Amazon S3 bucket. Each log file is compressed with gzip compression. The company must retain the log files indefinitely.

A security engineer occasionally analyzes the logs by using Amazon Athena to query the VPC flow logs. The query performance is degrading over time as the number of ingested logs is growing. A solutions architect must improve the performance of the log analysis and reduce the storage space that the VPC flow logs use.

Which solution will meet these requirements with the LARGEST performance improvement?

A. Create an AWS Lambda function to decompress the gzip files and to compress the files with bzip2 compression. Subscribe the Lambda function to an s3:ObjectCreated:Put S3 event notification for the S3 bucket.
B. Enable S3 Transfer Acceleration for the S3 bucket. Create an S3 Lifecycle configuration to move files to the S3 Intelligent-Tiering storage class as soon as the files are uploaded.
C. Update the VPC flow log configuration to store the files in Apache Parquet format. Specify hourly partitions for the log files.
D. Create a new Athena workgroup without data usage control limits. Use Athena engine version 2.
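The Parquet configuration that option C describes maps to the DestinationOptions block of ec2.create_flow_logs. The payload below is illustrative; the VPC ID and bucket ARN are placeholders.

```python
def flow_log_request(vpc_id: str, bucket_arn: str) -> dict:
    # Parameters for ec2.create_flow_logs with an S3 destination.
    return {
        "ResourceIds": [vpc_id],
        "ResourceType": "VPC",
        "TrafficType": "ALL",
        "LogDestinationType": "s3",
        "LogDestination": bucket_arn,
        "DestinationOptions": {
            "FileFormat": "parquet",   # columnar and compressed: faster
                                       # Athena scans, smaller objects
            "PerHourPartition": True,  # hourly partitions let Athena
                                       # prune data it doesn't need
            "HiveCompatiblePartitions": False,
        },
    }

req = flow_log_request("vpc-0abc1234", "arn:aws:s3:::central-flow-logs")
print(req["DestinationOptions"]["FileFormat"])  # parquet
```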

Question # 7

An e-commerce company is revamping its IT infrastructure and is planning to use AWS services. The company's CIO has asked a solutions architect to design a simple, highly available, and loosely coupled order processing application. The application is responsible for receiving and processing orders before storing them in an Amazon DynamoDB table. The application has a sporadic traffic pattern and should be able to scale during marketing campaigns to process the orders with minimal delays.

Which of the following is the MOST reliable approach to meet the requirements?

A. Receive the orders in an Amazon EC2-hosted database and use EC2 instances to process them.
B. Receive the orders in an Amazon SQS queue and invoke an AWS Lambda function to process them.
C. Receive the orders using the AWS Step Functions program and launch an Amazon ECS container to process them.
D. Receive the orders in Amazon Kinesis Data Streams and use Amazon EC2 instances to process them.
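A minimal sketch of the Lambda handler in option B: the function receives a batch of SQS messages and turns each order into a DynamoDB put request. The event shape follows the standard SQS-to-Lambda integration; the table name and order fields are assumptions for illustration, and the code only builds the requests rather than calling DynamoDB.

```python
import json

def handler(event: dict, context=None) -> list:
    writes = []
    for record in event.get("Records", []):
        # Each SQS record carries the order as a JSON string in "body".
        order = json.loads(record["body"])
        # In a real function this dict would be passed to
        # dynamodb.put_item; here we only build the request.
        writes.append({
            "TableName": "Orders",
            "Item": {
                "orderId": {"S": order["orderId"]},
                "amount": {"N": str(order["amount"])},
            },
        })
    return writes

sample_event = {
    "Records": [{"body": json.dumps({"orderId": "o-1", "amount": 42})}]
}
print(handler(sample_event)[0]["Item"]["orderId"]["S"])  # o-1
```

The queue absorbs traffic spikes while Lambda scales consumers automatically, which is what makes this pairing loosely coupled.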

Question # 8

A company that is developing a mobile game is making game assets available in two AWS Regions. Game assets are served from a set of Amazon EC2 instances behind an Application Load Balancer (ALB) in each Region. The company requires game assets to be fetched from the closest Region. If game assets become unavailable in the closest Region, they should be fetched from the other Region.

What should a solutions architect do to meet these requirements?

A. Create an Amazon CloudFront distribution. Create an origin group with one origin for each ALB. Set one of the origins as primary.
B. Create an Amazon Route 53 health check for each ALB. Create a Route 53 failover routing record pointing to the two ALBs. Set the Evaluate Target Health value to Yes.
C. Create two Amazon CloudFront distributions, each with one ALB as the origin. Create an Amazon Route 53 failover routing record pointing to the two CloudFront distributions. Set the Evaluate Target Health value to Yes.
D. Create an Amazon Route 53 health check for each ALB. Create a Route 53 latency alias record pointing to the two ALBs. Set the Evaluate Target Health value to Yes.
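For orientation, option A's origin group looks roughly like this fragment of a CloudFront distribution config: two ALB origins, with the first member as primary and failover on 5xx status codes. The IDs and the exact failover criteria are illustrative assumptions.

```python
def origin_group(primary_id: str, secondary_id: str) -> dict:
    # An OriginGroups entry for a CloudFront distribution config.
    return {
        "Id": "game-assets-group",
        "FailoverCriteria": {
            "StatusCodes": {"Quantity": 3, "Items": [500, 502, 503]}
        },
        "Members": {
            "Quantity": 2,
            # The first member listed is the primary origin; CloudFront
            # retries the second only when the failover criteria match.
            "Items": [{"OriginId": primary_id}, {"OriginId": secondary_id}],
        },
    }

grp = origin_group("alb-us-east-1", "alb-eu-west-1")
print(grp["Members"]["Items"][0]["OriginId"])  # alb-us-east-1
```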

Question # 9

A flood monitoring agency has deployed more than 10,000 water-level monitoring sensors. Sensors send continuous data updates, and each update is less than 1 MB in size. The agency has a fleet of on-premises application servers. These servers receive updates from the sensors, convert the raw data into a human-readable format, and write the results to an on-premises relational database server. Data analysts then use simple SQL queries to monitor the data.

The agency wants to increase overall application availability and reduce the effort that is required to perform maintenance tasks. These maintenance tasks, which include updates and patches to the application servers, cause downtime. While an application server is down, data is lost from sensors because the remaining servers cannot handle the entire workload.

The agency wants a solution that optimizes operational overhead and costs. A solutions architect recommends the use of AWS IoT Core to collect the sensor data.

What else should the solutions architect recommend to meet these requirements?

A. Send the sensor data to Amazon Kinesis Data Firehose. Use an AWS Lambda function to read the Kinesis Data Firehose data, convert it to .csv format, and insert it into an Amazon Aurora MySQL DB instance. Instruct the data analysts to query the data directly from the DB instance.
B. Send the sensor data to Amazon Kinesis Data Firehose. Use an AWS Lambda function to read the Kinesis Data Firehose data, convert it to Apache Parquet format, and save it to an Amazon S3 bucket. Instruct the data analysts to query the data by using Amazon Athena.
C. Send the sensor data to an Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) application to convert the data to .csv format and store it in an Amazon S3 bucket. Import the data into an Amazon Aurora MySQL DB instance. Instruct the data analysts to query the data directly from the DB instance.
D. Send the sensor data to an Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) application to convert the data to Apache Parquet format and store it in an Amazon S3 bucket. Instruct the data analysts to query the data by using Amazon Athena.

Question # 10

A company has many services running in its on-premises data center. The data center is connected to AWS using AWS Direct Connect (DX) and an IPsec VPN. The service data is sensitive, and connectivity cannot traverse the internet. The company wants to expand into a new market segment and begin offering its services to other companies that are using AWS.

Which solution will meet these requirements?

A. Create a VPC Endpoint Service that accepts TCP traffic, host it behind a Network Load Balancer, and make the service available over DX.
B. Create a VPC Endpoint Service that accepts HTTP or HTTPS traffic, host it behind an Application Load Balancer, and make the service available over DX.
C. Attach an internet gateway to the VPC, and ensure that network access control and security group rules allow the relevant inbound and outbound traffic.
D. Attach a NAT gateway to the VPC, and ensure that network access control and security group rules allow the relevant inbound and outbound traffic.

Question # 11

A company wants to establish a dedicated connection between its on-premises infrastructure and AWS. The company is setting up a 1 Gbps AWS Direct Connect connection to its account VPC. The architecture includes a transit gateway and a Direct Connect gateway to connect multiple VPCs and the on-premises infrastructure. The company must connect to VPC resources over a transit VIF by using the Direct Connect connection.

Which combination of steps will meet these requirements? (Select TWO.)

A. Update the 1 Gbps Direct Connect connection to 10 Gbps.
B. Advertise the on-premises network prefixes over the transit VIF.
C. Advertise the VPC prefixes from the Direct Connect gateway to the on-premises network over the transit VIF.
D. Update the Direct Connect connection's MACsec encryption mode attribute to must_encrypt.
E. Associate a MACsec Connection Key Name/Connectivity Association Key (CKN/CAK) pair with the Direct Connect connection.

Question # 12

A company hosts an intranet web application on Amazon EC2 instances behind an Application Load Balancer (ALB). Currently, users authenticate to the application against an internal user database. The company needs to authenticate users to the application by using an existing AWS Directory Service for Microsoft Active Directory directory. All users with accounts in the directory must have access to the application.

Which solution will meet these requirements?

A. Create a new app client in the directory. Create a listener rule for the ALB. Specify the authenticate-oidc action for the listener rule. Configure the listener rule with the appropriate issuer, client ID and secret, and endpoint details for the Active Directory service. Configure the new app client with the callback URL that the ALB provides.
B. Configure an Amazon Cognito user pool. Configure the user pool with a federated identity provider (IdP) that has metadata from the directory. Create an app client. Associate the app client with the user pool. Create a listener rule for the ALB. Specify the authenticate-cognito action for the listener rule. Configure the listener rule to use the user pool and app client.
C. Add the directory as a new IAM identity provider (IdP). Create a new IAM role that has an entity type of SAML 2.0 federation. Configure a role policy that allows access to the ALB. Configure the new role as the default authenticated user role for the IdP. Create a listener rule for the ALB. Specify the authenticate-oidc action for the listener rule.
D. Enable AWS IAM Identity Center (AWS Single Sign-On). Configure the directory as an external identity provider (IdP) that uses SAML. Use the automatic provisioning method. Create a new IAM role that has an entity type of SAML 2.0 federation. Configure a role policy that allows access to the ALB. Attach the new role to all groups. Create a listener rule for the ALB. Specify the authenticate-cognito action for the listener rule.

Question # 13

A public retail web application uses an Application Load Balancer (ALB) in front of Amazon EC2 instances running across multiple Availability Zones (AZs) in a Region, backed by an Amazon RDS MySQL Multi-AZ deployment. Target group health checks are configured to use HTTP and pointed at the product catalog page. Auto Scaling is configured to maintain the web fleet size based on the ALB health check.

Recently, the application experienced an outage. Auto Scaling continuously replaced the instances during the outage. A subsequent investigation determined that the web server metrics were within the normal range, but the database tier was experiencing high load, resulting in severely elevated query response times.

Which of the following changes together would remediate these issues while improving monitoring capabilities for the availability and functionality of the entire application stack for future growth? (Select TWO.)

A. Configure read replicas for Amazon RDS MySQL and use the single reader endpoint in the web application to reduce the load on the backend database tier.
B. Configure the target group health check to point at a simple HTML page instead of a product catalog page, and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails.
C. Configure the target group health check to use a TCP check of the Amazon EC2 web server, and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails.
D. Configure an Amazon CloudWatch alarm for Amazon RDS with an action to recover a high-load, impaired RDS instance in the database tier.
E. Configure an Amazon ElastiCache cluster and place it between the web application and RDS MySQL instances to reduce the load on the backend database tier.

Question # 14

A company needs to implement disaster recovery for a critical application that runs in a single AWS Region. The application's users interact with a web frontend that is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The application writes to an Amazon RDS for MySQL DB instance. The application also outputs processed documents that are stored in an Amazon S3 bucket.

The company's finance team directly queries the database to run reports. During busy periods, these queries consume resources and negatively affect application performance.

A solutions architect must design a solution that will provide resiliency during a disaster. The solution must minimize data loss and must resolve the performance problems that result from the finance team's queries.

Which solution will meet these requirements?

A. Migrate the database to Amazon DynamoDB and use DynamoDB global tables. Instruct the finance team to query a global table in a separate Region. Create an AWS Lambda function to periodically synchronize the contents of the original S3 bucket to a new S3 bucket in the separate Region. Launch EC2 instances and create an ALB in the separate Region. Configure the application to point to the new S3 bucket.
B. Launch additional EC2 instances that host the application in a separate Region. Add the additional instances to the existing ALB. In the separate Region, create a read replica of the RDS DB instance. Instruct the finance team to run queries against the read replica. Use S3 Cross-Region Replication (CRR) from the original S3 bucket to a new S3 bucket in the separate Region. During a disaster, promote the read replica to a standalone DB instance. Configure the application to point to the new S3 bucket and to the newly promoted read replica.
C. Create a read replica of the RDS DB instance in a separate Region. Instruct the finance team to run queries against the read replica. Create AMIs of the EC2 instances that host the application frontend. Copy the AMIs to the separate Region. Use S3 Cross-Region Replication (CRR) from the original S3 bucket to a new S3 bucket in the separate Region. During a disaster, promote the read replica to a standalone DB instance. Launch EC2 instances from the AMIs and create an ALB to present the application to end users. Configure the application to point to the new S3 bucket.
D. Create hourly snapshots of the RDS DB instance. Copy the snapshots to a separate Region. Add an Amazon ElastiCache cluster in front of the existing RDS database. Create AMIs of the EC2 instances that host the application frontend. Copy the AMIs to the separate Region. Use S3 Cross-Region Replication (CRR) from the original S3 bucket to a new S3 bucket in the separate Region. During a disaster, restore the database from the latest RDS snapshot. Launch EC2 instances from the AMIs and create an ALB to present the application to end users. Configure the application to point to the new S3 bucket.
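Several of these options share the S3 Cross-Region Replication (CRR) step. As an illustrative sketch, a bucket replication configuration (the document passed to s3.put_bucket_replication) looks roughly like this; the role ARN and bucket names are placeholder assumptions.

```python
def replication_config(role_arn: str, dest_bucket_arn: str) -> dict:
    # A V2-schema S3 replication configuration replicating every object
    # to a destination bucket in the DR Region.
    return {
        "Role": role_arn,  # IAM role S3 assumes to replicate objects
        "Rules": [{
            "ID": "dr-replication",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter: replicate every object
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": dest_bucket_arn},
        }],
    }

cfg = replication_config(
    "arn:aws:iam::111111111111:role/s3-crr-role",
    "arn:aws:s3:::dr-docs-bucket",
)
print(cfg["Rules"][0]["Status"])  # Enabled
```

Versioning must be enabled on both buckets before CRR can be configured.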

Question # 15

A company wants to use Amazon WorkSpaces in combination with thin client devices to replace aging desktops. Employees use the desktops to access applications that work with clinical trial data. Corporate security policy states that access to the applications must be restricted to only company branch office locations. The company is considering adding an additional branch office in the next 6 months.

Which solution meets these requirements with the MOST operational efficiency?

A. Create an IP access control group rule with the list of public addresses from the branch offices. Associate the IP access control group with the WorkSpaces directory.
B. Use AWS Firewall Manager to create a web ACL rule with an IPSet with the list of public addresses from the branch office locations. Associate the web ACL with the WorkSpaces directory.
C. Use AWS Certificate Manager (ACM) to issue trusted device certificates to the machines deployed in the branch office locations. Enable restricted access on the WorkSpaces directory.
D. Create a custom WorkSpace image with Windows Firewall configured to restrict access to the public addresses of the branch offices. Use the image to deploy the WorkSpaces.

Question # 16

A software development company has multiple engineers who are working remotely. The company is running Active Directory Domain Services (AD DS) on an Amazon EC2 instance. The company's security policy states that all internal, nonpublic services that are deployed in a VPC must be accessible through a VPN. Multi-factor authentication (MFA) must be used for access to a VPN.

What should a solutions architect do to meet these requirements?

A. Create an AWS Site-to-Site VPN connection. Configure integration between a VPN and AD DS. Use an Amazon WorkSpaces client with MFA support enabled to establish a VPN connection.
B. Create an AWS Client VPN endpoint. Create an AD Connector directory for integration with AD DS. Enable MFA for AD Connector. Use AWS Client VPN to establish a VPN connection.
C. Create multiple AWS Site-to-Site VPN connections by using AWS VPN CloudHub. Configure integration between AWS VPN CloudHub and AD DS. Use AWS Copilot to establish a VPN connection.
D. Create an Amazon WorkLink endpoint. Configure integration between Amazon WorkLink and AD DS. Enable MFA in Amazon WorkLink. Use AWS Client VPN to establish a VPN connection.
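To ground option B, here is an illustrative payload mirroring ec2.create_client_vpn_endpoint for an endpoint that authenticates against AD DS through an AD Connector directory (MFA is enabled on the AD Connector itself). The CIDR, certificate ARN, and directory ID are placeholder assumptions.

```python
def client_vpn_request(client_cidr: str, server_cert_arn: str,
                       directory_id: str) -> dict:
    # Parameters for ec2.create_client_vpn_endpoint with Active
    # Directory authentication via an AD Connector directory.
    return {
        "ClientCidrBlock": client_cidr,
        "ServerCertificateArn": server_cert_arn,
        "AuthenticationOptions": [{
            "Type": "directory-service-authentication",
            "ActiveDirectory": {"DirectoryId": directory_id},
        }],
        "ConnectionLogOptions": {"Enabled": False},
    }

req = client_vpn_request(
    "10.100.0.0/22",
    "arn:aws:acm:us-east-1:111111111111:certificate/example",
    "d-1234567890",
)
print(req["AuthenticationOptions"][0]["Type"])
# directory-service-authentication
```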

Question # 17

A company needs to improve the reliability of its ticketing application. The application runs on an Amazon Elastic Container Service (Amazon ECS) cluster. The company uses Amazon CloudFront to serve the application. A single ECS service of the ECS cluster is the CloudFront distribution's origin.

The application allows only a specific number of active users to enter a ticket purchasing flow. These users are identified by an encrypted attribute in their JSON Web Token (JWT). All other users are redirected to a waiting room module until there is available capacity for purchasing.

The application is experiencing high loads. The waiting room module is working as designed, but load on the waiting room is disrupting the application's availability. This disruption is negatively affecting the application's ticket sale transactions.

Which solution will provide the MOST reliability for ticket sale transactions during periods of high load?

A. Create a separate service in the ECS cluster for the waiting room. Use a separate scaling configuration. Ensure that the ticketing service uses the JWT information and appropriately forwards requests to the waiting room service.
B. Move the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Split the waiting room module into a pod that is separate from the ticketing pod. Make the ticketing pod part of a StatefulSet. Ensure that the ticketing pod uses the JWT information and appropriately forwards requests to the waiting room pod.
C. Create a separate service in the ECS cluster for the waiting room. Use a separate scaling configuration. Create a CloudFront function that inspects the JWT information and appropriately forwards requests to the ticketing service or the waiting room service.
D. Move the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Split the waiting room module into a pod that is separate from the ticketing pod. Use AWS App Mesh by provisioning the App Mesh controller for Kubernetes. Enable mTLS authentication and service-to-service authentication for communication between the ticketing pod and the waiting room pod. Ensure that the ticketing pod uses the JWT information and appropriately forwards requests to the waiting room pod.

Question # 18

A company is currently in the design phase of an application that will need an RPO of less than 5 minutes and an RTO of less than 10 minutes. The solutions architecture team is forecasting that the database will store approximately 10 TB of data. As part of the design, they are looking for a database solution that will provide the company with the ability to fail over to a secondary Region.

Which solution will meet these business requirements at the LOWEST cost?

A. Deploy an Amazon Aurora DB cluster and take snapshots of the cluster every 5 minutes. Once a snapshot is complete, copy the snapshot to a secondary Region to serve as a backup in the event of a failure.
B. Deploy an Amazon RDS instance with a cross-Region read replica in a secondary Region. In the event of a failure, promote the read replica to become the primary.
C. Deploy an Amazon Aurora DB cluster in the primary Region and another in a secondary Region. Use AWS DMS to keep the secondary Region in sync.
D. Deploy an Amazon RDS instance with a read replica in the same Region. In the event of a failure, promote the read replica to become the primary.

Question # 19

A company is using an organization in AWS Organizations to manage AWS accounts. For each new project, the company creates a new linked account. After the creation of a new account, the root user signs in to the new account and creates a service request to increase the service quota for Amazon EC2 instances. A solutions architect needs to automate this process.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create an Amazon EventBridge rule to detect creation of a new account. Send the event to an Amazon Simple Notification Service (Amazon SNS) topic that invokes an AWS Lambda function. Configure the Lambda function to run the request-service-quota-increase command to request a service quota increase for EC2 instances.
B. Create a Service Quotas request template in the management account. Configure the desired service quota increases for EC2 instances.
C. Create an AWS Config rule in the management account to set the service quota for EC2 instances.
D. Create an Amazon EventBridge rule to detect creation of a new account. Send the event to an Amazon Simple Notification Service (Amazon SNS) topic that invokes an AWS Lambda function. Configure the Lambda function to run the create-case command to request a service quota increase for EC2 instances.
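A sketch of the automation pieces that option A describes: an EventBridge event pattern matching account creation in AWS Organizations, plus the payload the Lambda function would pass to Service Quotas' RequestServiceQuotaIncrease. The quota code and desired value are assumptions for illustration.

```python
# Event pattern for an EventBridge rule that matches the Organizations
# CreateAccount API call recorded by CloudTrail.
EVENT_PATTERN = {
    "source": ["aws.organizations"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {"eventName": ["CreateAccount"]},
}

def quota_increase_request(desired_value: float) -> dict:
    # Parameters for service-quotas.request_service_quota_increase.
    return {
        "ServiceCode": "ec2",
        # Hypothetical quota code for a running On-Demand instances limit.
        "QuotaCode": "L-1216C47A",
        "DesiredValue": desired_value,
    }

req = quota_increase_request(256.0)
print(req["ServiceCode"])  # ec2
```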

Question # 20

A company needs to gather data from an experiment in a remote location that does not have internet connectivity. During the experiment, sensors that are connected to a local network will generate 6 TB of data in a proprietary format over the course of 1 week. The sensors can be configured to upload their data files to an FTP server periodically, but the sensors do not have their own FTP server. The sensors also do not support other protocols. The company needs to collect the data centrally and move the data to object storage in the AWS Cloud as soon as possible after the experiment.
Which solution will meet these requirements?

A. Order an AWS Snowball Edge Compute Optimized device. Connect the device to the local network. Configure AWS DataSync with a target bucket name, and unload the data over NFS to the device. After the experiment, return the device to AWS so that the data can be loaded into Amazon S3.
B. Order an AWS Snowcone device, including an Amazon Linux 2 AMI. Connect the device to the local network. Launch an Amazon EC2 instance on the device. Create a shell script that periodically downloads data from each sensor. After the experiment, return the device to AWS so that the data can be loaded as an Amazon Elastic Block Store (Amazon EBS) volume.
C. Order an AWS Snowcone device, including an Amazon Linux 2 AMI. Connect the device to the local network. Launch an Amazon EC2 instance on the device. Install and configure an FTP server on the EC2 instance. Configure the sensors to upload data to the EC2 instance. After the experiment, return the device to AWS so that the data can be loaded into Amazon S3.
D. Order an AWS Snowcone device. Connect the device to the local network. Configure the device to use Amazon FSx. Configure the sensors to upload data to the device. Configure AWS DataSync on the device to synchronize the uploaded data with an Amazon S3 bucket. Return the device to AWS so that the data can be loaded as an Amazon Elastic Block Store (Amazon EBS) volume.

Question # 21

A company has Linux-based Amazon EC2 instances. Users must access the instances by using SSH with EC2 SSH key pairs. Each machine requires a unique EC2 key pair. The company wants to implement a key rotation policy that will, upon request, automatically rotate all the EC2 key pairs and keep the keys in a securely encrypted place. The company will accept less than 1 minute of downtime during key rotation.
Which solution will meet these requirements?

A. Store all the keys in AWS Secrets Manager. Define a Secrets Manager rotation schedule to invoke an AWS Lambda function to generate new key pairs. Replace public keys on EC2 instances. Update the private keys in Secrets Manager.
B. Store all the keys in Parameter Store, a capability of AWS Systems Manager, as a string. Define a Systems Manager maintenance window to invoke an AWS Lambda function to generate new key pairs. Replace public keys on EC2 instances. Update the private keys in Parameter Store.
C. Import the EC2 key pairs into AWS Key Management Service (AWS KMS). Configure automatic key rotation for these key pairs. Create an Amazon EventBridge scheduled rule to invoke an AWS Lambda function to initiate the key rotation in AWS KMS.
D. Add all the EC2 instances to Fleet Manager, a capability of AWS Systems Manager. Define a Systems Manager maintenance window to issue a Systems Manager Run Command document to generate new key pairs and to rotate public keys to all the instances in Fleet Manager.

Question # 22

A company has a Windows-based desktop application that is packaged and deployed to the users' Windows machines. The company recently acquired another company that has employees who primarily use machines with a Linux operating system. The acquiring company has decided to migrate and rehost the Windows-based desktop application to AWS.
All employees must be authenticated before they use the application. The acquiring company uses Active Directory on premises but wants a simplified way to manage access to the application on AWS for all the employees.
Which solution will rehost the application on AWS with the LEAST development effort?

A. Set up and provision an Amazon WorkSpaces virtual desktop for every employee. Implement authentication by using Amazon Cognito identity pools. Instruct employees to run the application from their provisioned WorkSpaces virtual desktops.
B. Create an Auto Scaling group of Windows-based Amazon EC2 instances. Join each EC2 instance to the company's Active Directory domain. Implement authentication by using the Active Directory that is running on premises. Instruct employees to run the application by using a Windows remote desktop.
C. Use an Amazon AppStream 2.0 image builder to create an image that includes the application and the required configurations. Provision an AppStream 2.0 On-Demand fleet with a dynamic Fleet Auto Scaling process for running the image. Implement authentication by using AppStream 2.0 user pools. Instruct the employees to access the application by starting browser-based AppStream 2.0 streaming sessions.
D. Refactor and containerize the application to run as a web-based application. Run the application in Amazon Elastic Container Service (Amazon ECS) on AWS Fargate with step scaling policies. Implement authentication by using Amazon Cognito user pools. Instruct the employees to run the application from their browsers.

Question # 23

A company is developing an application that will display financial reports. The company needs a solution that can store financial information that comes from multiple systems. The solution must provide the reports through a web interface and must serve the data with less than 500 milliseconds of latency to end users. The solution also must be highly available and must have an RTO of 30 seconds.
Which solution will meet these requirements?

A. Use an Amazon Redshift cluster to store the data. Use a static website that is hosted on Amazon S3 with backend APIs that are served by an Amazon Elastic Kubernetes Service (Amazon EKS) cluster to provide the reports to the application.
B. Use Amazon S3 to store the data. Use Amazon Athena to provide the reports to the application. Use AWS App Runner to serve the application to view the reports.
C. Use Amazon DynamoDB to store the data. Use an embedded Amazon QuickSight dashboard with direct query datasets to provide the reports to the application.
D. Use Amazon Keyspaces (for Apache Cassandra) to store the data. Use AWS Elastic Beanstalk to provide the reports to the application.

Question # 24

A company is planning to migrate an on-premises data center to AWS. The company currently hosts the data center on Linux-based VMware VMs. A solutions architect must collect information about network dependencies between the VMs. The information must be in the form of a diagram that details host IP addresses, hostnames, and network connection information.
Which solution will meet these requirements?

A. Use AWS Application Discovery Service. Select an AWS Migration Hub home AWS Region. Install the AWS Application Discovery Agent on the on-premises servers for data collection. Grant permissions to Application Discovery Service to use the Migration Hub network diagrams.
B. Use the AWS Application Discovery Service Agentless Collector for server data collection. Export the network diagrams from the AWS Migration Hub in .png format.
C. Install the AWS Application Migration Service agent on the on-premises servers for data collection. Use AWS Migration Hub data in Workload Discovery on AWS to generate network diagrams.
D. Install the AWS Application Migration Service agent on the on-premises servers for data collection. Export data from AWS Migration Hub in .csv format into an Amazon CloudWatch dashboard to generate network diagrams.

Question # 25

A company maintains information on premises in approximately 1 million .csv files that are hosted on a VM. The data initially is 10 TB in size and grows at a rate of 1 TB each week. The company needs to automate backups of the data to the AWS Cloud.
Backups of the data must occur daily. The company needs a solution that applies custom filters to back up only a subset of the data that is located in designated source directories. The company has set up an AWS Direct Connect connection.
Which solution will meet the backup requirements with the LEAST operational overhead?

A. Use the Amazon S3 CopyObject API operation with multipart upload to copy the existing data to Amazon S3. Use the CopyObject API operation to replicate new data to Amazon S3 daily.
B. Create a backup plan in AWS Backup to back up the data to Amazon S3. Schedule the backup plan to run daily.
C. Install the AWS DataSync agent as a VM that runs on the on-premises hypervisor. Configure a DataSync task to replicate the data to Amazon S3 daily.
D. Use an AWS Snowball Edge device for the initial backup. Use AWS DataSync for incremental backups to Amazon S3 daily.

Question # 26

A company needs to migrate an on-premises SFTP site to AWS. The SFTP site currently runs on a Linux VM. Uploaded files are made available to downstream applications through an NFS share.
As part of the migration to AWS, a solutions architect must implement high availability. The solution must provide external vendors with a set of static public IP addresses that the vendors can allow. The company has set up an AWS Direct Connect connection between its on-premises data center and its VPC.
Which solution will meet these requirements with the least operational overhead?

A. Create an AWS Transfer Family server. Configure an internet-facing VPC endpoint for the Transfer Family server. Specify an Elastic IP address for each subnet. Configure the Transfer Family server to place files into an Amazon Elastic File System (Amazon EFS) file system that is deployed across multiple Availability Zones. Modify the configuration on the downstream applications that access the existing NFS share to mount the EFS endpoint instead.
B. Create an AWS Transfer Family server. Configure a publicly accessible endpoint for the Transfer Family server. Configure the Transfer Family server to place files into an Amazon Elastic File System (Amazon EFS) file system that is deployed across multiple Availability Zones. Modify the configuration on the downstream applications that access the existing NFS share to mount the EFS endpoint instead.
C. Use AWS Application Migration Service to migrate the existing Linux VM to an Amazon EC2 instance. Assign an Elastic IP address to the EC2 instance. Mount an Amazon Elastic File System (Amazon EFS) file system to the EC2 instance. Configure the SFTP server to place files in the EFS file system. Modify the configuration on the downstream applications that access the existing NFS share to mount the EFS endpoint instead.
D. Use AWS Application Migration Service to migrate the existing Linux VM to an AWS Transfer Family server. Configure a publicly accessible endpoint for the Transfer Family server. Configure the Transfer Family server to place files into an Amazon FSx for Lustre file system that is deployed across multiple Availability Zones. Modify the configuration on the downstream applications that access the existing NFS share to mount the FSx for Lustre endpoint instead.

Question # 27

A company's factory and automation applications are running in a single VPC. More than 23 applications run on a combination of Amazon EC2, Amazon Elastic Container Service (Amazon ECS), and Amazon RDS.
The company has software engineers spread across three teams. One of the three teams owns each application, and each team is responsible for the cost and performance of all of its applications. Team resources have tags that represent their application and team. The teams use IAM access for daily activities.
The company needs to determine which costs on the monthly AWS bill are attributable to each application or team. The company also must be able to create reports to compare costs from the last 12 months and to help forecast costs for the next 12 months. A solutions architect must recommend an AWS Billing and Cost Management solution that provides these cost reports.
Which combination of actions will meet these requirements? (Select THREE.)

A. Activate the user-defined cost allocation tags that represent the application and the team.
B. Activate the AWS-generated cost allocation tags that represent the application and the team.
C. Create a cost category for each application in Billing and Cost Management.
D. Activate IAM access to Billing and Cost Management.
E. Create a cost budget.
F. Enable Cost Explorer.

Question # 28

A company's compliance audit reveals that some Amazon Elastic Block Store (Amazon EBS) volumes that were created in an AWS account were not encrypted. A solutions architect must implement a solution to encrypt all new EBS volumes at rest.
Which solution will meet this requirement with the LEAST effort?

A. Create an Amazon EventBridge rule to detect the creation of unencrypted EBS volumes. Invoke an AWS Lambda function to delete noncompliant volumes.
B. Use AWS Audit Manager with data encryption.
C. Create an AWS Config rule to detect the creation of a new EBS volume. Encrypt the volume by using AWS Systems Manager Automation.
D. Turn on EBS encryption by default in all AWS Regions.
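Option D can be scripted in a short loop. In the sketch below the per-Region call is injected so the example runs offline; with boto3 it would be `boto3.client("ec2", region_name=region).enable_ebs_encryption_by_default()`, which maps to the real EnableEbsEncryptionByDefault EC2 API operation (verify the method name in your SDK version).

```python
# Sketch of option D: enable EBS encryption by default in every AWS Region.
# enable_for_region is injected to keep the sketch runnable offline; with
# boto3 it would be:
#   lambda region: boto3.client("ec2", region_name=region)
#                       .enable_ebs_encryption_by_default()["EbsEncryptionByDefault"]

def enable_default_ebs_encryption(regions, enable_for_region):
    """Return a map of Region name to the resulting encryption-by-default flag."""
    return {region: enable_for_region(region) for region in regions}
```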

Question # 29

A company is preparing to deploy an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for a workload. The company expects the cluster to support an unpredictable number of stateless pods. Many of the pods will be created during a short time period as the workload automatically scales the number of replicas that the workload uses.
Which solution will MAXIMIZE node resilience?

A. Use a separate launch template to deploy the EKS control plane into a second cluster that is separate from the workload node groups.
B. Update the workload node groups. Use a smaller number of node groups and larger instances in the node groups.
C. Configure the Kubernetes Cluster Autoscaler to ensure that the compute capacity of the workload node groups stays under provisioned.
D. Configure the workload to use topology spread constraints that are based on Availability Zone.

Question # 30

A company wants to design a disaster recovery (DR) solution for an application that runs in the company's data center. The application writes to an SMB file share and creates a copy on a second file share. Both file shares are in the data center. The application uses two types of files: metadata files and image files.
The company wants to store the copy on AWS. The company needs the ability to use SMB to access the data from either the data center or AWS if a disaster occurs. The copy of the data is rarely accessed but must be available within 5 minutes.
Which solution will meet these requirements MOST cost-effectively?

A. Deploy AWS Outposts with Amazon S3 storage. Configure a Windows Amazon EC2 instance on Outposts as a file server.
B. Deploy an Amazon FSx File Gateway. Configure an Amazon FSx for Windows File Server Multi-AZ file system that uses SSD storage.
C. Deploy an Amazon S3 File Gateway. Configure the S3 File Gateway to use Amazon S3 Standard-Infrequent Access (S3 Standard-IA) for the metadata files and to use S3 Glacier Deep Archive for the image files.
D. Deploy an Amazon S3 File Gateway. Configure the S3 File Gateway to use Amazon S3 Standard-Infrequent Access (S3 Standard-IA) for the metadata files and image files.

Question # 31

A solutions architect needs to improve an application that is hosted in the AWS Cloud. The application uses an Amazon Aurora MySQL DB instance that is experiencing overloaded connections. Most of the application's operations insert records into the database. The application currently stores credentials in a text-based configuration file.
The solutions architect needs to implement a solution so that the application can handle the current connection load. The solution must keep the credentials secure and must provide the ability to rotate the credentials automatically on a regular basis.
Which solution will meet these requirements?

A. Deploy an Amazon RDS Proxy layer in front of the DB instance. Store the connection credentials as a secret in AWS Secrets Manager.
B. Deploy an Amazon RDS Proxy layer in front of the DB instance. Store the connection credentials in AWS Systems Manager Parameter Store.
C. Create an Aurora Replica. Store the connection credentials as a secret in AWS Secrets Manager.
D. Create an Aurora Replica. Store the connection credentials in AWS Systems Manager Parameter Store.

Question # 32

A company is migrating an on-premises application and a MySQL database to AWS. The application processes highly sensitive data, and new data is constantly updated in the database. The data must not be transferred over the internet. The company also must encrypt the data in transit and at rest.
The database is 5 TB in size. The company already has created the database schema in an Amazon RDS for MySQL DB instance. The company has set up a 1 Gbps AWS Direct Connect connection to AWS. The company also has set up a public VIF and a private VIF. A solutions architect needs to design a solution that will migrate the data to AWS with the least possible downtime.
Which solution will meet these requirements?

A. Perform a database backup. Copy the backup files to an AWS Snowball Edge Storage Optimized device. Import the backup to Amazon S3. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3) for encryption at rest. Use TLS for encryption in transit. Import the data from Amazon S3 to the DB instance.
B. Use AWS Database Migration Service (AWS DMS) to migrate the data to AWS. Create a DMS replication instance in a private subnet. Create VPC endpoints for AWS DMS. Configure a DMS task to copy data from the on-premises database to the DB instance by using full load plus change data capture (CDC). Use the AWS Key Management Service (AWS KMS) default key for encryption at rest. Use TLS for encryption in transit.
C. Perform a database backup. Use AWS DataSync to transfer the backup files to Amazon S3. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3) for encryption at rest. Use TLS for encryption in transit. Import the data from Amazon S3 to the DB instance.
D. Use Amazon S3 File Gateway. Set up a private connection to Amazon S3 by using AWS PrivateLink. Perform a database backup. Copy the backup files to Amazon S3. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3) for encryption at rest. Use TLS for encryption in transit. Import the data from Amazon S3 to the DB instance.

Question # 33

A company is serving files to its customers through an SFTP server that is accessible over the internet. The SFTP server is running on a single Amazon EC2 instance with an Elastic IP address attached. Customers connect to the SFTP server through its Elastic IP address and use SSH for authentication. The EC2 instance also has an attached security group that allows access from all customer IP addresses.
A solutions architect must implement a solution to improve availability, minimize the complexity of infrastructure management, and minimize the disruption to customers who access files. The solution must not change the way customers connect.
Which solution will meet these requirements?

A. Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS Transfer Family server. Configure the Transfer Family server with a publicly accessible endpoint. Associate the SFTP Elastic IP address with the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all files from the SFTP server to the S3 bucket.
B. Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS Transfer Family server. Configure the Transfer Family server with a VPC-hosted, internet-facing endpoint. Associate the SFTP Elastic IP address with the new endpoint. Attach the security group with customer IP addresses to the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all files from the SFTP server to the S3 bucket.
C. Disassociate the Elastic IP address from the EC2 instance. Create a new Amazon Elastic File System (Amazon EFS) file system to be used for SFTP file hosting. Create an AWS Fargate task definition to run an SFTP server. Specify the EFS file system as a mount in the task definition. Create a Fargate service by using the task definition, and place a Network Load Balancer (NLB) in front of the service. When configuring the service, attach the security group with customer IP addresses to the tasks that run the SFTP server. Associate the Elastic IP address with the NLB. Sync all files from the SFTP server to the S3 bucket.
D. Disassociate the Elastic IP address from the EC2 instance. Create a multi-attach Amazon Elastic Block Store (Amazon EBS) volume to be used for SFTP file hosting. Create a Network Load Balancer (NLB) with the Elastic IP address attached. Create an Auto Scaling group with EC2 instances that run an SFTP server. Define in the Auto Scaling group that instances that are launched should attach the new multi-attach EBS volume. Configure the Auto Scaling group to automatically add instances behind the NLB. Configure the Auto Scaling group to use the security group that allows customer IP addresses for the EC2 instances that the Auto Scaling group launches. Sync all files from the SFTP server to the new multi-attach EBS volume.

Question # 34

An online retail company hosts its stateful web-based application and MySQL database in an on-premises data center on a single server. The company wants to increase its customer base by conducting more marketing campaigns and promotions. In preparation, the company wants to migrate its application and database to AWS to increase the reliability of its architecture.
Which solution should provide the HIGHEST level of reliability?

A. Migrate the database to an Amazon RDS MySQL Multi-AZ DB instance. Deploy the application in an Auto Scaling group on Amazon EC2 instances behind an Application Load Balancer. Store sessions in Amazon Neptune.
B. Migrate the database to Amazon Aurora MySQL. Deploy the application in an Auto Scaling group on Amazon EC2 instances behind an Application Load Balancer. Store sessions in an Amazon ElastiCache for Redis replication group.
C. Migrate the database to Amazon DocumentDB (with MongoDB compatibility). Deploy the application in an Auto Scaling group on Amazon EC2 instances behind a Network Load Balancer. Store sessions in Amazon Kinesis Data Firehose.
D. Migrate the database to an Amazon RDS MariaDB Multi-AZ DB instance. Deploy the application in an Auto Scaling group on Amazon EC2 instances behind an Application Load Balancer. Store sessions in Amazon ElastiCache for Memcached.

Question # 35

A car rental company has built a serverless REST API to provide data to its mobile app. The app consists of an Amazon API Gateway API with a Regional endpoint, AWS Lambda functions, and an Amazon Aurora MySQL Serverless DB cluster. The company recently opened the API to mobile apps of partners. A significant increase in the number of requests resulted, causing sporadic database memory errors. Analysis of the API traffic indicates that clients are making multiple HTTP GET requests for the same queries in a short period of time. Traffic is concentrated during business hours, with spikes around holidays and other events.
The company needs to improve its ability to support the additional usage while minimizing the increase in costs associated with the solution.
Which strategy meets these requirements?

A. Convert the API Gateway Regional endpoint to an edge-optimized endpoint. Enable caching in the production stage.
B. Implement an Amazon ElastiCache for Redis cache to store the results of the database calls. Modify the Lambda functions to use the cache.
C. Modify the Aurora Serverless DB cluster configuration to increase the maximum amount of available memory.
D. Enable throttling in the API Gateway production stage. Set the rate and burst values to limit the incoming calls.
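Option B describes the classic read-through cache pattern. In this illustrative sketch a plain dictionary stands in for the ElastiCache for Redis cluster so the pattern is clear; production code would use a Redis client and set a TTL on each cached entry.

```python
# Read-through cache sketch for option B. A dict stands in for ElastiCache
# for Redis; real code would use a Redis client with a TTL per entry.

class CachedQueryRunner:
    def __init__(self, run_query):
        self._run_query = run_query  # the call that hits the Aurora database
        self._cache = {}
        self.db_calls = 0

    def query(self, sql):
        """Return a cached result, querying the database only on a miss."""
        if sql not in self._cache:
            self.db_calls += 1
            self._cache[sql] = self._run_query(sql)
        return self._cache[sql]
```

Repeated identical GET-driven queries then cost one database round trip instead of many, which addresses the sporadic memory errors in the scenario.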

Question # 36

A company has a web application that securely uploads pictures and videos to an Amazon S3 bucket. The company requires that only authenticated users are allowed to post content. The application generates a presigned URL that is used to upload objects through a browser interface. Most users are reporting slow upload times for objects larger than 100 MB.
What can a solutions architect do to improve the performance of these uploads while ensuring only authenticated users are allowed to post content?

A. Set up an Amazon API Gateway with an edge-optimized API endpoint that has a resource as an S3 service proxy. Configure the PUT method for this resource to expose the S3 PutObject operation. Secure the API Gateway using a COGNITO_USER_POOLS authorizer. Have the browser interface use API Gateway instead of the presigned URL to upload objects.
B. Set up an Amazon API Gateway with a regional API endpoint that has a resource as an S3 service proxy. Configure the PUT method for this resource to expose the S3 PutObject operation. Secure the API Gateway using an AWS Lambda authorizer. Have the browser interface use API Gateway instead of the presigned URL to upload objects.
C. Enable an S3 Transfer Acceleration endpoint on the S3 bucket. Use the endpoint when generating the presigned URL. Have the browser interface upload the objects to this URL using the S3 multipart upload API.
D. Configure an Amazon CloudFront distribution for the destination S3 bucket. Enable PUT and POST methods for the CloudFront cache behavior. Update the CloudFront origin to use an origin access identity (OAI). Give the OAI user s3:PutObject permissions in the bucket policy. Have the browser interface upload objects using the CloudFront distribution.
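Option C pairs Transfer Acceleration with multipart upload. As a small illustration of the multipart side, the helper below plans the parts for a large object; S3 requires every part except the last to be at least 5 MiB, and the 16 MiB part size here is an arbitrary choice.

```python
# Plan multipart-upload parts for a large object, as used with the S3
# multipart upload API in option C. 16 MiB is an arbitrary part size;
# S3 requires each part except the last to be at least 5 MiB.

MIB = 1024 * 1024

def plan_parts(object_size, part_size=16 * MIB):
    """Return (part_number, offset, length) tuples covering the object."""
    parts = []
    offset, part_number = 0, 1
    while offset < object_size:
        length = min(part_size, object_size - offset)
        parts.append((part_number, offset, length))
        offset += length
        part_number += 1
    return parts
```

A 100 MiB upload, for example, becomes six 16 MiB parts plus a final 4 MiB part, each of which the browser can upload in parallel to the accelerated endpoint.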

Question # 37

A company has a website that runs on four Amazon EC2 instances that are behind an Application Load Balancer (ALB). When the ALB detects that an EC2 instance is no longer available, an Amazon CloudWatch alarm enters the ALARM state. A member of the company's operations team then manually adds a new EC2 instance behind the ALB.
A solutions architect needs to design a highly available solution that automatically handles the replacement of EC2 instances. The company needs to minimize downtime during the switch to the new solution.
Which set of steps should the solutions architect take to meet these requirements?

A. Delete the existing ALB. Create an Auto Scaling group that is configured to handle the web application traffic. Attach a new launch template to the Auto Scaling group. Create a new ALB. Attach the Auto Scaling group to the new ALB. Attach the existing EC2 instances to the Auto Scaling group.
B. Create an Auto Scaling group that is configured to handle the web application traffic. Attach a new launch template to the Auto Scaling group. Attach the Auto Scaling group to the existing ALB. Attach the existing EC2 instances to the Auto Scaling group.
C. Delete the existing ALB and the EC2 instances. Create an Auto Scaling group that is configured to handle the web application traffic. Attach a new launch template to the Auto Scaling group. Create a new ALB. Attach the Auto Scaling group to the new ALB. Wait for the Auto Scaling group to launch the minimum number of EC2 instances.
D. Create an Auto Scaling group that is configured to handle the web application traffic. Attach a new launch template to the Auto Scaling group. Attach the Auto Scaling group to the existing ALB. Wait for the existing ALB to register the existing EC2 instances with the Auto Scaling group.

Question # 38

A company is deploying a third-party firewall appliance solution from AWS Marketplace to monitor and protect traffic that leaves the company's AWS environments. The company wants to deploy this appliance into a shared services VPC and route all outbound internet-bound traffic through the appliances.
A solutions architect needs to recommend a deployment method that prioritizes reliability and minimizes failover time between firewall appliances within a single AWS Region. The company has set up routing from the shared services VPC to other VPCs.
Which steps should the solutions architect recommend to meet these requirements? (Select THREE.)

A. Deploy two firewall appliances into the shared services VPC, each in a separate Availability Zone.
B. Create a new Network Load Balancer in the shared services VPC. Create a new target group, and attach it to the new Network Load Balancer. Add each of the firewall appliance instances to the target group.
C. Create a new Gateway Load Balancer in the shared services VPC. Create a new target group, and attach it to the new Gateway Load Balancer. Add each of the firewall appliance instances to the target group.
D. Create a VPC interface endpoint. Add a route to the route table in the shared services VPC. Designate the new endpoint as the next hop for traffic that enters the shared services VPC from other VPCs.
E. Deploy two firewall appliances into the shared services VPC, each in the same Availability Zone.
F. Create a VPC Gateway Load Balancer endpoint. Add a route to the route table in the shared services VPC. Designate the new endpoint as the next hop for traffic that enters the shared services VPC from other VPCs.

Question # 39

An ecommerce company runs an application on AWS. The application has an Amazon API Gateway API that invokes an AWS Lambda function. The data is stored in an Amazon RDS for PostgreSQL DB instance.
During the company's most recent flash sale, a sudden increase in API calls negatively affected the application's performance. A solutions architect reviewed the Amazon CloudWatch metrics during that time and noticed a significant increase in Lambda invocations and database connections. The CPU utilization also was high on the DB instance.
What should the solutions architect recommend to optimize the application's performance?

A. Increase the memory of the Lambda function. Modify the Lambda function to close the database connections when the data is retrieved.
B. Add an Amazon ElastiCache for Redis cluster to store the frequently accessed data from the RDS database.
C. Create an RDS proxy by using the Lambda console. Modify the Lambda function to use the proxy endpoint.
D. Modify the Lambda function to connect to the database outside of the function's handler. Check for an existing database connection before creating a new connection.

Question # 40

A company hosts a software as a service (SaaS) solution on AWS. The solution has an Amazon API Gateway API that serves an HTTPS endpoint. The API uses AWS Lambda functions for compute. The Lambda functions store data in an Amazon Aurora Serverless v1 database.
The company used the AWS Serverless Application Model (AWS SAM) to deploy the solution. The solution extends across multiple Availability Zones and has no disaster recovery (DR) plan.
A solutions architect must design a DR strategy that can recover the solution in another AWS Region. The solution has an RTO of 5 minutes and an RPO of 1 minute.
What should the solutions architect do to meet these requirements?

A. Create a read replica of the Aurora Serverless v1 database in the target Region. Use AWS SAM to create a runbook to deploy the solution to the target Region. Promote the read replica to primary in case of disaster.
B. Change the Aurora Serverless v1 database to a standard Aurora MySQL global database that extends across the source Region and the target Region. Use AWS SAM to create a runbook to deploy the solution to the target Region.
C. Create an Aurora Serverless v1 DB cluster that has multiple writer instances in the target Region. Launch the solution in the target Region. Configure the two Regional solutions to work in an active-passive configuration.
D. Change the Aurora Serverless v1 database to a standard Aurora MySQL global database that extends across the source Region and the target Region. Launch the solution in the target Region. Configure the two Regional solutions to work in an active-passive configuration.

Question # 41

A company is deploying a new cluster for big data analytics on AWS. The cluster will run across many Linux Amazon EC2 instances that are spread across multiple Availability Zones.
All of the nodes in the cluster must have read and write access to common underlying file storage. The file storage must be highly available, must be resilient, must be compatible with the Portable Operating System Interface (POSIX), and must accommodate high levels of throughput.
Which storage solution will meet these requirements?

A. Provision an AWS Storage Gateway file gateway NFS file share that is attached to an Amazon S3 bucket. Mount the NFS file share on each EC2 instance in the cluster.
B. Provision a new Amazon Elastic File System (Amazon EFS) file system that uses General Purpose performance mode. Mount the EFS file system on each EC2 instance in the cluster.
C. Provision a new Amazon Elastic Block Store (Amazon EBS) volume that uses the io2 volume type. Attach the EBS volume to all of the EC2 instances in the cluster.
D. Provision a new Amazon Elastic File System (Amazon EFS) file system that uses Max I/O performance mode. Mount the EFS file system on each EC2 instance in the cluster.

Question # 42

A company deploys a new web application. As part of the setup, the company configures AWS WAF to log to Amazon S3 through Amazon Kinesis Data Firehose. The company develops an Amazon Athena query that runs once daily to return AWS WAF log data from the previous 24 hours. The volume of daily logs is constant. However, over time, the same query is taking more time to run.
A solutions architect needs to design a solution to prevent the query time from continuing to increase. The solution must minimize operational overhead.
Which solution will meet these requirements?

A. Create an AWS Lambda function that consolidates each day's AWS WAF logs into one log file.
B. Reduce the amount of data scanned by configuring AWS WAF to send logs to a different S3 bucket each day.
C. Update the Kinesis Data Firehose configuration to partition the data in Amazon S3 by date and time. Create external tables for Amazon Redshift. Configure Amazon Redshift Spectrum to query the data source.
D. Modify the Kinesis Data Firehose configuration and Athena table definition to partition the data by date and time. Change the Athena query to view the relevant partitions.
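The partitioning idea in option D can be sketched as follows. The Hive-style prefix layout and the `waf_logs` table name are assumptions for illustration; the point is that a date-partitioned layout lets the daily query scan only one day's objects instead of the whole bucket.

```python
# Sketch of option D: write Firehose output under a date/time S3 prefix and
# restrict the daily Athena query to the matching partitions. The prefix
# layout (year=/month=/day=/hour=) and table name are illustrative.

from datetime import datetime, timezone

def partition_prefix(ts: datetime) -> str:
    """Build a Hive-style S3 prefix for one batch of log records."""
    return (
        f"year={ts.year:04d}/month={ts.month:02d}/"
        f"day={ts.day:02d}/hour={ts.hour:02d}/"
    )

def daily_query(day: datetime) -> str:
    """Athena query limited to one day's partitions (hypothetical table name)."""
    return (
        "SELECT * FROM waf_logs "
        f"WHERE year = '{day.year:04d}' AND month = '{day.month:02d}' "
        f"AND day = '{day.day:02d}'"
    )

ts = datetime(2024, 3, 16, 9, 30, tzinfo=timezone.utc)
print(partition_prefix(ts))  # year=2024/month=03/day=16/hour=09/
```

Because Athena prunes partitions that the WHERE clause excludes, the amount of data scanned per run stays constant even as the bucket grows.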

Question # 43

A solutions architect has an operational workload deployed on Amazon EC2 instances in an Auto Scaling group. The VPC architecture spans two Availability Zones (AZ) with a subnet in each that the Auto Scaling group is targeting. The VPC is connected to an on-premises environment and connectivity cannot be interrupted. The maximum size of the Auto Scaling group is 20 instances in service. The VPC IPv4 addressing is as follows:
VPC CIDR: 10.0.0.0/23
AZ1 subnet CIDR: 10.0.0.0/24
AZ2 subnet CIDR: 10.0.1.0/24
Since deployment, a third AZ has become available in the Region. The solutions architect wants to adopt the new AZ without adding additional IPv4 address space and without service downtime.
Which solution will meet these requirements?

A. Update the Auto Scaling group to use the AZ2 subnet only. Delete and re-create the AZ1 subnet using half the previous address space. Adjust the Auto Scaling group to also use the new AZ1 subnet. When the instances are healthy, adjust the Auto Scaling group to use the AZ1 subnet only. Remove the current AZ2 subnet. Create a new AZ2 subnet using the second half of the address space from the original AZ1 subnet. Create a new AZ3 subnet using half the original AZ2 subnet address space, then update the Auto Scaling group to target all three new subnets.
B. Terminate the EC2 instances in the AZ1 subnet. Delete and re-create the AZ1 subnet using half the address space. Update the Auto Scaling group to use this new subnet. Repeat this for the second AZ. Define a new subnet in AZ3, then update the Auto Scaling group to target all three new subnets.
C. Create a new VPC with the same IPv4 address space and define three subnets, with one for each AZ. Update the existing Auto Scaling group to target the new subnets in the new VPC.
D. Update the Auto Scaling group to use the AZ2 subnet only. Update the AZ1 subnet to have half the previous address space. Adjust the Auto Scaling group to also use the AZ1 subnet again. When the instances are healthy, adjust the Auto Scaling group to use the AZ1 subnet only. Update the current AZ2 subnet and assign the second half of the address space from the original AZ1 subnet. Create a new AZ3 subnet using half the original AZ2 subnet address space, then update the Auto Scaling group to target all three new subnets.

Question # 44

A data analytics company has an Amazon Redshift cluster that consists of several reserved nodes. The cluster is experiencing unexpected bursts of usage because a team of employees is compiling a deep audit analysis report. The queries to generate the report are complex read queries and are CPU intensive.
Business requirements dictate that the cluster must be able to service read and write queries at all times. A solutions architect must devise a solution that accommodates the bursts of usage.
Which solution meets these requirements MOST cost-effectively?

A. Provision an Amazon EMR cluster. Offload the complex data processing tasks.
B. Deploy an AWS Lambda function to add capacity to the Amazon Redshift cluster by using a classic resize operation when the cluster's CPU metrics in Amazon CloudWatch reach 80%.
C. Deploy an AWS Lambda function to add capacity to the Amazon Redshift cluster by using an elastic resize operation when the cluster's CPU metrics in Amazon CloudWatch reach 80%.
D. Turn on the Concurrency Scaling feature for the Amazon Redshift cluster.

Question # 45

An online survey company runs its application in the AWS Cloud. The application is distributed and consists of microservices that run in an automatically scaled Amazon Elastic Container Service (Amazon ECS) cluster. The ECS cluster is a target for an Application Load Balancer (ALB). The ALB is a custom origin for an Amazon CloudFront distribution.
The company has a survey that contains sensitive data. The sensitive data must be encrypted when it moves through the application. The application's data-handling microservice is the only microservice that should be able to decrypt the data.
Which solution will meet these requirements?

A. Create a symmetric AWS Key Management Service (AWS KMS) key that is dedicated to the data-handling microservice. Create a field-level encryption profile and a configuration. Associate the KMS key and the configuration with the CloudFront cache behavior.
B. Create an RSA key pair that is dedicated to the data-handling microservice. Upload the public key to the CloudFront distribution. Create a field-level encryption profile and a configuration. Add the configuration to the CloudFront cache behavior.
C. Create a symmetric AWS Key Management Service (AWS KMS) key that is dedicated to the data-handling microservice. Create a Lambda@Edge function. Program the function to use the KMS key to encrypt the sensitive data.
D. Create an RSA key pair that is dedicated to the data-handling microservice. Create a Lambda@Edge function. Program the function to use the private key of the RSA key pair to encrypt the sensitive data.

Question # 46

A company uses an organization in AWS Organizations to manage the company's AWS accounts. The company uses AWS CloudFormation to deploy all infrastructure. A finance team wants to build a chargeback model. The finance team asked each business unit to tag resources by using a predefined list of project values.
When the finance team used the AWS Cost and Usage Report in AWS Cost Explorer and filtered based on project, the team noticed noncompliant project values. The company wants to enforce the use of project tags for new resources.
Which solution will meet these requirements with the LEAST effort?

A. Create a tag policy that contains the allowed project tag values in the organization's management account. Create an SCP that denies the cloudformation:CreateStack API operation unless a project tag is added. Attach the SCP to each OU.
B. Create a tag policy that contains the allowed project tag values in each OU. Create an SCP that denies the cloudformation:CreateStack API operation unless a project tag is added. Attach the SCP to each OU.
C. Create a tag policy that contains the allowed project tag values in the AWS management account. Create an IAM policy that denies the cloudformation:CreateStack API operation unless a project tag is added. Assign the policy to each user.
D. Use AWS Service Catalog to manage the CloudFormation stacks as products. Use a TagOptions library to control project tag values. Share the portfolio with all OUs that are in the organization.
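An SCP of the kind options A and B describe can be sketched as follows. It is written here as a Python dict for illustration; the `project` tag key name is an assumption, and the `Null` condition on `aws:RequestTag/project` is the standard way to express "deny unless the request supplies this tag".

```python
# Sketch of the SCP from options A/B: deny cloudformation:CreateStack
# unless the request carries a "project" tag. The tag key is illustrative.

import json

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyCreateStackWithoutProjectTag",
            "Effect": "Deny",
            "Action": "cloudformation:CreateStack",
            "Resource": "*",
            # Null: true means the aws:RequestTag/project key is absent,
            # i.e., no project tag was supplied with the request.
            "Condition": {"Null": {"aws:RequestTag/project": "true"}},
        }
    ],
}

print(json.dumps(scp, indent=2))
```

The companion tag policy (not shown) would then restrict the *values* of the `project` tag to the finance team's predefined list; the SCP only enforces the tag's presence.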

Question # 47

A company is running a serverless application that consists of several AWS Lambda functions and Amazon DynamoDB tables. The company has created new functionality that requires the Lambda functions to access an Amazon Neptune DB cluster. The Neptune DB cluster is located in three subnets in a VPC.
Which of the possible solutions will allow the Lambda functions to access the Neptune DB cluster and DynamoDB tables? (Select TWO.)

A. Create three public subnets in the Neptune VPC, and route traffic through an internet gateway. Host the Lambda functions in the three new public subnets.
B. Create three private subnets in the Neptune VPC, and route internet traffic through a NAT gateway. Host the Lambda functions in the three new private subnets.
C. Host the Lambda functions outside the VPC. Update the Neptune security group to allow access from the IP ranges of the Lambda functions.
D. Host the Lambda functions outside the VPC. Create a VPC endpoint for the Neptune database, and have the Lambda functions access Neptune over the VPC endpoint.
E. Create three private subnets in the Neptune VPC. Host the Lambda functions in the three new isolated subnets. Create a VPC endpoint for DynamoDB, and route DynamoDB traffic to the VPC endpoint.

Question # 48

A company is running multiple workloads in the AWS Cloud. The company has separate units for software development. The company uses AWS Organizations and federation with SAML to give permissions to developers to manage resources in their AWS accounts. The development units each deploy their production workloads into a common production account.
Recently, an incident occurred in the production account in which members of a development unit terminated an EC2 instance that belonged to a different development unit. A solutions architect must create a solution that prevents a similar incident from happening in the future. The solution also must allow developers to manage the instances used for their workloads.
Which strategy will meet these requirements?

A. Create separate OUs in AWS Organizations for each development unit. Assign the created OUs to the company AWS accounts. Create separate SCPs with a deny action and a StringNotEquals condition for the DevelopmentUnit resource tag that matches the development unit name. Assign the SCP to the corresponding OU.
B. Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS) session tag during SAML federation. Update the IAM policy for the developers' assumed IAM role with a deny action and a StringNotEquals condition for the DevelopmentUnit resource tag and aws:PrincipalTag/DevelopmentUnit.
C. Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS) session tag during SAML federation. Create an SCP with an allow action and a StringEquals condition for the DevelopmentUnit resource tag and aws:PrincipalTag/DevelopmentUnit. Assign the SCP to the root OU.
D. Create separate IAM policies for each development unit. For every IAM policy, add an allow action and a StringEquals condition for the DevelopmentUnit resource tag and the development unit name. During SAML federation, use AWS Security Token Service (AWS STS) to assign the IAM policy and match the development unit name to the assumed IAM role.
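The attribute-based access control (ABAC) pattern in option B can be sketched as follows. The policy statement mirrors the deny/StringNotEquals shape the option describes; the `is_denied` helper is a toy illustration of the comparison, not real IAM evaluation logic.

```python
# Sketch of option B's ABAC deny: terminate is denied when the instance's
# DevelopmentUnit tag differs from the caller's DevelopmentUnit session tag.

policy_statement = {
    "Effect": "Deny",
    "Action": "ec2:TerminateInstances",
    "Resource": "*",
    "Condition": {
        "StringNotEquals": {
            # Compare the resource tag against the principal's session tag.
            "aws:ResourceTag/DevelopmentUnit": "${aws:PrincipalTag/DevelopmentUnit}"
        }
    },
}

def is_denied(principal_tag: str, resource_tag: str) -> bool:
    """Toy model of the StringNotEquals comparison above."""
    return principal_tag != resource_tag

# A "payments" developer terminating a "payments" instance is not denied;
# the same developer targeting a "wallet" instance is.
print(is_denied("payments", "payments"), is_denied("payments", "wallet"))
```

Because the session tag arrives automatically during SAML federation, no per-unit policies or OUs need to be maintained as units are added.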

Question # 49

A company has an organization in AWS Organizations that includes a separate AWS account for each of the company's departments. Application teams from different departments develop and deploy solutions independently.
The company wants to reduce compute costs and manage costs appropriately across departments. The company also wants to improve visibility into billing for individual departments. The company does not want to lose operational flexibility when the company selects compute resources.
Which solution will meet these requirements?

A. Use AWS Budgets for each department. Use Tag Editor to apply tags to appropriate resources. Purchase EC2 Instance Savings Plans.
B. Configure AWS Organizations to use consolidated billing. Implement a tagging strategy that identifies departments. Use SCPs to apply tags to appropriate resources. Purchase EC2 Instance Savings Plans.
C. Configure AWS Organizations to use consolidated billing. Implement a tagging strategy that identifies departments. Use Tag Editor to apply tags to appropriate resources. Purchase Compute Savings Plans.
D. Use AWS Budgets for each department. Use SCPs to apply tags to appropriate resources. Purchase Compute Savings Plans.

Question # 50

A company is developing a web application that runs on Amazon EC2 instances in an Auto Scaling group behind a public-facing Application Load Balancer (ALB). Only users from a specific country are allowed to access the application. The company needs the ability to log the access requests that have been blocked. The solution should require the least possible maintenance.
Which solution meets these requirements?

A. Create an IPSet containing a list of IP ranges that belong to the specified country. Create an AWS WAF web ACL. Configure a rule to block any requests that do not originate from an IP range in the IPSet. Associate the rule with the web ACL. Associate the web ACL with the ALB.
B. Create an AWS WAF web ACL. Configure a rule to block any requests that do not originate from the specified country. Associate the rule with the web ACL. Associate the web ACL with the ALB.
C. Configure AWS Shield to block any requests that do not originate from the specified country. Associate AWS Shield with the ALB.
D. Create a security group rule that allows ports 80 and 443 from IP ranges that belong to the specified country. Associate the security group with the ALB.
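The WAF rule in option B can be sketched as the rule JSON a web ACL would contain. The rule name, priority, and the `"US"` country code are placeholders; the structure (a NotStatement wrapping a GeoMatchStatement, with a Block action) is the standard shape for "block anything not from this country".

```python
# Sketch of option B: a WAF rule that blocks requests whose source country
# is not in the allowed list. "US" and the rule name are placeholders.

block_rule = {
    "Name": "BlockNonAllowedCountry",
    "Priority": 0,
    "Action": {"Block": {}},
    "Statement": {
        "NotStatement": {
            "Statement": {"GeoMatchStatement": {"CountryCodes": ["US"]}}
        }
    },
    "VisibilityConfig": {
        # Sampling and metrics let the company see the blocked requests,
        # satisfying the logging requirement with no extra maintenance.
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockNonAllowedCountry",
    },
}

print(block_rule["Statement"]["NotStatement"]["Statement"])
```

Geo matching is maintained by AWS, whereas the IPSet in option A would require the company to keep country IP ranges up to date itself.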

Question # 51

A company is migrating to the cloud. It wants to evaluate the configurations of virtual machines in its existing data center environment to ensure that it can size new Amazon EC2 instances accurately. The company wants to collect metrics, such as CPU, memory, and disk utilization, and it needs an inventory of what processes are running on each instance. The company would also like to monitor network connections to map communications between servers.
Which would enable the collection of this data MOST cost effectively?

A. Use AWS Application Discovery Service and deploy the data collection agent to each virtual machine in the data center.
B. Configure the Amazon CloudWatch agent on all servers within the local environment and publish metrics to Amazon CloudWatch Logs.
C. Use AWS Application Discovery Service and enable agentless discovery in the existing virtualization environment.
D. Enable AWS Application Discovery Service in the AWS Management Console and configure the corporate firewall to allow scans over a VPN.

Question # 52

A company uses AWS Organizations to manage a multi-account structure. The company has hundreds of AWS accounts and expects the number of accounts to increase. The company is building a new application that uses Docker images. The company will push the Docker images to Amazon Elastic Container Registry (Amazon ECR). Only accounts that are within the company's organization should have access to the images.
The company has a CI/CD process that runs frequently. The company wants to retain all the tagged images. However, the company wants to retain only the five most recent untagged images.
Which solution will meet these requirements with the LEAST operational overhead?

A. Create a private repository in Amazon ECR. Create a permissions policy for the repository that allows only required ECR operations. Include a condition to allow the ECR operations if the value of the aws:PrincipalOrgID condition key is equal to the ID of the company's organization. Add a lifecycle rule to the ECR repository that deletes all untagged images over the count of five.
B. Create a public repository in Amazon ECR. Create an IAM role in the ECR account. Set permissions so that any account can assume the role if the value of the aws:PrincipalOrgID condition key is equal to the ID of the company's organization. Add a lifecycle rule to the ECR repository that deletes all untagged images over the count of five.
C. Create a private repository in Amazon ECR. Create a permissions policy for the repository that includes only required ECR operations. Include a condition to allow the ECR operations for all account IDs in the organization. Schedule a daily Amazon EventBridge rule to invoke an AWS Lambda function that deletes all untagged images over the count of five.
D. Create a public repository in Amazon ECR. Configure Amazon ECR to use an interface VPC endpoint with an endpoint policy that includes the required permissions for images that the company needs to pull. Include a condition to allow the ECR operations for all account IDs in the company's organization. Schedule a daily Amazon EventBridge rule to invoke an AWS Lambda function that deletes all untagged images over the count of five.
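The lifecycle rule from option A can be sketched as follows. The dict mirrors the ECR lifecycle-policy JSON shape; the `simulate` helper is only an illustration of which images such a rule would expire, not ECR's actual evaluation engine.

```python
# Sketch of option A's lifecycle rule: expire untagged images once more
# than five exist, leaving tagged images untouched.

lifecycle_policy = {
    "rules": [
        {
            "rulePriority": 1,
            "description": "Keep only the five most recent untagged images",
            "selection": {
                "tagStatus": "untagged",
                "countType": "imageCountMoreThan",
                "countNumber": 5,
            },
            "action": {"type": "expire"},
        }
    ]
}

def simulate(untagged_push_times, keep=5):
    """Illustrate the rule: return push times of images it would expire."""
    newest_first = sorted(untagged_push_times, reverse=True)
    return newest_first[keep:]

print(simulate([1, 2, 3, 4, 5, 6, 7]))  # [2, 1]: everything older than the five newest
```

Because ECR evaluates the lifecycle policy itself, no EventBridge schedule or Lambda function (options C and D) is needed, which is what makes option A the lowest-overhead choice.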

Question # 53

A company wants to send data from its on-premises systems to Amazon S3 buckets. The company created the S3 buckets in three different accounts. The company must send the data privately without the data traveling across the internet. The company has no existing dedicated connectivity to AWS.
Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)

A. Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Set up an AWS Direct Connect connection with a private VIF between the on-premises environment and the private VPC.
B. Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Set up an AWS Direct Connect connection with a public VIF between the on-premises environment and the private VPC.
C. Create an Amazon S3 interface endpoint in the networking account.
D. Create an Amazon S3 gateway endpoint in the networking account.
E. Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Peer VPCs from the accounts that host the S3 buckets with the VPC in the networking account.

Question # 54

A company runs an unauthenticated static website (www.example.com) that includes a registration form for users. The website uses Amazon S3 for hosting and uses Amazon CloudFront as the content delivery network with AWS WAF configured. When the registration form is submitted, the website calls an Amazon API Gateway API endpoint that invokes an AWS Lambda function to process the payload and forward the payload to an external API call.
During testing, a solutions architect encounters a cross-origin resource sharing (CORS) error. The solutions architect confirms that the CloudFront distribution origin has the Access-Control-Allow-Origin header set to www.example.com.
What should the solutions architect do to resolve the error?

A. Change the CORS configuration on the S3 bucket. Add rules for CORS to the AllowedOrigin element for www.example.com.
B. Enable the CORS setting in AWS WAF. Create a web ACL rule in which the Access-Control-Allow-Origin header is set to www.example.com.
C. Enable the CORS setting on the API Gateway API endpoint. Ensure that the API endpoint is configured to return all responses that have the Access-Control-Allow-Origin header set to www.example.com.
D. Enable the CORS setting on the Lambda function. Ensure that the return code of the function has the Access-Control-Allow-Origin header set to www.example.com.
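With a Lambda proxy integration behind API Gateway, the fix in option C ultimately means the responses flowing back through the API must carry the CORS header. A minimal sketch of such a handler follows; the `https://` scheme on the origin and the method/header lists are assumptions for illustration.

```python
# Sketch of the CORS fix: responses returned through API Gateway must carry
# Access-Control-Allow-Origin so the browser accepts the cross-origin call
# from www.example.com. Response shape follows the Lambda proxy format.

import json

ALLOWED_ORIGIN = "https://www.example.com"  # assumed scheme

def handler(event, context):
    return {
        "statusCode": 200,
        "headers": {
            "Access-Control-Allow-Origin": ALLOWED_ORIGIN,
            "Access-Control-Allow-Methods": "OPTIONS,POST",
            "Access-Control-Allow-Headers": "Content-Type",
        },
        "body": json.dumps({"ok": True}),
    }
```

API Gateway's "Enable CORS" setting additionally answers the browser's preflight OPTIONS request; the headers above cover the actual POST response.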

Question # 55

A company migrated an application to the AWS Cloud. The application runs on two Amazon EC2 instances behind an Application Load Balancer (ALB). Application data is stored in a MySQL database that runs on an additional EC2 instance. The application's use of the database is read-heavy.
The application loads static content from Amazon Elastic Block Store (Amazon EBS) volumes that are attached to each EC2 instance. The static content is updated frequently and must be copied to each EBS volume.
The load on the application changes throughout the day. During peak hours, the application cannot handle all the incoming requests. Trace data shows that the database cannot handle the read load during peak hours.
Which solution will improve the reliability of the application?

A. Migrate the application to a set of AWS Lambda functions. Set the Lambda functions as targets for the ALB. Create a new single EBS volume for the static content. Configure the Lambda functions to read from the new EBS volume. Migrate the database to an Amazon RDS for MySQL Multi-AZ DB cluster.
B. Migrate the application to a set of AWS Step Functions state machines. Set the state machines as targets for the ALB. Create an Amazon Elastic File System (Amazon EFS) file system for the static content. Configure the state machines to read from the EFS file system. Migrate the database to Amazon Aurora MySQL Serverless v2 with a reader DB instance.
C. Containerize the application. Migrate the application to an Amazon Elastic Container Service (Amazon ECS) cluster. Use the AWS Fargate launch type for the tasks that host the application. Create a new single EBS volume for the static content. Mount the new EBS volume on the ECS cluster. Configure AWS Application Auto Scaling on the ECS cluster. Set the ECS service as a target for the ALB. Migrate the database to an Amazon RDS for MySQL Multi-AZ DB cluster.
D. Containerize the application. Migrate the application to an Amazon Elastic Container Service (Amazon ECS) cluster. Use the AWS Fargate launch type for the tasks that host the application. Create an Amazon Elastic File System (Amazon EFS) file system for the static content. Mount the EFS file system to each container. Configure AWS Application Auto Scaling on the ECS cluster. Set the ECS service as a target for the ALB. Migrate the database to Amazon Aurora MySQL Serverless v2 with a reader DB instance.

Question # 56

A company is using Amazon API Gateway to deploy a private REST API that will provide access to sensitive data. The API must be accessible only from an application that is deployed in a VPC. The company deploys the API successfully. However, the API is not accessible from an Amazon EC2 instance that is deployed in the VPC.
Which solution will provide connectivity between the EC2 instance and the API?

A. Create an interface VPC endpoint for API Gateway. Attach an endpoint policy that allows apigateway:* actions. Disable private DNS naming for the VPC endpoint. Configure an API resource policy that allows access from the VPC. Use the VPC endpoint's DNS name to access the API.
B. Create an interface VPC endpoint for API Gateway. Attach an endpoint policy that allows the execute-api:Invoke action. Enable private DNS naming for the VPC endpoint. Configure an API resource policy that allows access from the VPC endpoint. Use the API endpoint's DNS names to access the API.
C. Create a Network Load Balancer (NLB) and a VPC link. Configure private integration between API Gateway and the NLB. Use the API endpoint's DNS names to access the API.
D. Create an Application Load Balancer (ALB) and a VPC link. Configure private integration between API Gateway and the ALB. Use the ALB endpoint's DNS name to access the API.
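The resource policy from option B can be sketched as follows. The `vpce-` ID is a placeholder; the `aws:SourceVpce` condition is the standard way to restrict a private API to calls arriving through a specific interface VPC endpoint.

```python
# Sketch of option B's API resource policy: allow execute-api:Invoke only
# for requests that arrive through the interface VPC endpoint. The vpce-
# ID below is a placeholder.

resource_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
            "Condition": {
                "StringEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}
            },
        }
    ],
}

print(resource_policy["Statement"][0]["Condition"])
```

With private DNS enabled on the endpoint, the EC2 instance can then call the API's normal execute-api hostname and the request resolves to the endpoint automatically.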

Question # 57

A solutions architect is creating an application that stores objects in an Amazon S3 bucket. The solutions architect must deploy the application in two AWS Regions that will be used simultaneously. The objects in the two S3 buckets must remain synchronized with each other.
Which combination of steps will meet these requirements with the LEAST operational overhead? (Select THREE)

A. Use AWS Lambda functions to connect to the IoT devices.
B. Configure the IoT devices to publish to AWS IoT Core.
C. Write the metadata to a self-managed MongoDB database on an Amazon EC2 instance.
D. Write the metadata to Amazon DocumentDB (with MongoDB compatibility).
E. Use AWS Step Functions state machines with AWS Lambda tasks to prepare the reports and to write the reports to Amazon S3. Use Amazon CloudFront with an S3 origin to serve the reports.
F. Use an Amazon Elastic Kubernetes Service (Amazon EKS) cluster with Amazon EC2 instances to prepare the reports. Use an ingress controller in the EKS cluster to serve the reports.

Question # 58

A solutions architect is creating an application that stores objects in an Amazon S3 bucket. The solutions architect must deploy the application in two AWS Regions that will be used simultaneously. The objects in the two S3 buckets must remain synchronized with each other.
Which combination of steps will meet these requirements with the LEAST operational overhead? (Select THREE)

A. Create an S3 Multi-Region Access Point. Change the application to refer to the Multi-Region Access Point.
B. Configure two-way S3 Cross-Region Replication (CRR) between the two S3 buckets.
C. Modify the application to store objects in each S3 bucket.
D. Create an S3 Lifecycle rule for each S3 bucket to copy objects from one S3 bucket to the other S3 bucket.
E. Enable S3 Versioning for each S3 bucket.
F. Configure an event notification for each S3 bucket to invoke an AWS Lambda function to copy objects from one S3 bucket to the other S3 bucket.

Question # 59

A North American company with headquarters on the East Coast is deploying a new web application running on Amazon EC2 in the us-east-1 Region. The application should dynamically scale to meet user demand and maintain resiliency. Additionally, the application must have disaster recovery capabilities in an active-passive configuration with the us-west-1 Region.
Which steps should a solutions architect take after creating a VPC in the us-east-1 Region?

A. Create a VPC in the us-west-1 Region. Use inter-Region VPC peering to connect both VPCs. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs) to the VPC in the us-east-1 Region. Deploy EC2 instances across multiple AZs in each Region as part of an Auto Scaling group spanning both VPCs and served by the ALB.
B. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs) to the VPC in the us-east-1 Region. Deploy EC2 instances across multiple AZs as part of an Auto Scaling group served by the ALB. Deploy the same solution to the us-west-1 Region. Create an Amazon Route 53 record set with a failover routing policy and health checks enabled to provide high availability across both Regions.
C. Create a VPC in the us-west-1 Region. Use inter-Region VPC peering to connect both VPCs. Deploy an Application Load Balancer (ALB) that spans both VPCs. Deploy EC2 instances across multiple Availability Zones as part of an Auto Scaling group in each VPC served by the ALB. Create an Amazon Route 53 record that points to the ALB.
D. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs) to the VPC in the us-east-1 Region. Deploy EC2 instances across multiple AZs as part of an Auto Scaling group served by the ALB. Deploy the same solution to the us-west-1 Region. Create separate Amazon Route 53 records in each Region that point to the ALB in the Region. Use Route 53 health checks to provide high availability across both Regions.

Question # 60

A company needs to monitor a growing number of Amazon S3 buckets across two AWS Regions. The company also needs to track the percentage of objects that are encrypted in Amazon S3. The company needs a dashboard to display this information for internal compliance teams.
Which solution will meet these requirements with the LEAST operational overhead?

A. Create a new S3 Storage Lens dashboard in each Region to track bucket and encryption metrics. Aggregate data from both Region dashboards into a single dashboard in Amazon QuickSight for the compliance teams.
B. Deploy an AWS Lambda function in each Region to list the number of buckets and the encryption status of objects. Store this data in Amazon S3. Use Amazon Athena queries to display the data on a custom dashboard in Amazon QuickSight for the compliance teams.
C. Use the S3 Storage Lens default dashboard to track bucket and encryption metrics. Give the compliance teams access to the dashboard directly in the S3 console.
D. Create an Amazon EventBridge rule to detect AWS CloudTrail events for S3 object creation. Configure the rule to invoke an AWS Lambda function to record encryption metrics in Amazon DynamoDB. Use Amazon QuickSight to display the metrics in a dashboard for the compliance teams.

Question # 61

A financial services company runs a complex, multi-tier application on Amazon EC2 instances and AWS Lambda functions. The application stores temporary data in Amazon S3. The S3 objects are valid for only 45 minutes and are deleted after 24 hours.
The company deploys each version of the application by launching an AWS CloudFormation stack. The stack creates all resources that are required to run the application. When the company deploys and validates a new application version, the company deletes the CloudFormation stack of the old version.
The company recently tried to delete the CloudFormation stack of an old application version, but the operation failed. An analysis shows that CloudFormation failed to delete an existing S3 bucket. A solutions architect needs to resolve this issue without making major changes to the application's architecture.
Which solution meets these requirements?

A. Implement a Lambda function that deletes all files from a given S3 bucket. Integrate this Lambda function as a custom resource into the CloudFormation stack. Ensure that the custom resource has a DependsOn attribute that points to the S3 bucket's resource.
B. Modify the CloudFormation template to provision an Amazon Elastic File System (Amazon EFS) file system to store the temporary files there instead of in Amazon S3. Configure the Lambda functions to run in the same VPC as the file system. Mount the file system to the EC2 instances and Lambda functions.
C. Modify the CloudFormation stack to create an S3 Lifecycle rule that expires all objects 45 minutes after creation. Add a DependsOn attribute that points to the S3 bucket's resource.
D. Modify the CloudFormation stack to attach a DeletionPolicy attribute with a value of Delete to the S3 bucket.
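The cleanup logic behind option A's custom resource can be sketched as follows. The list/delete callables are injected so the core logic can be shown without boto3; a real handler would use the S3 API and would also send the cfn-response signal back to CloudFormation, which is omitted here.

```python
# Sketch of option A's custom resource: on a stack Delete event, empty the
# S3 bucket so CloudFormation can then delete it. Storage access is
# injected (list_keys/delete_keys) so the logic runs without boto3.

def empty_bucket(list_keys, delete_keys, batch_size=1000):
    """Delete every object, batching deletes (the S3 API caps a delete at 1000 keys)."""
    keys = list(list_keys())
    for i in range(0, len(keys), batch_size):
        delete_keys(keys[i:i + batch_size])
    return len(keys)

def handler(event, list_keys, delete_keys):
    if event.get("RequestType") == "Delete":
        return empty_bucket(list_keys, delete_keys)
    return 0  # Create/Update: nothing to clean up

# In-memory stand-in for the bucket:
store = {f"tmp/{i}" for i in range(3)}
deleted = handler(
    {"RequestType": "Delete"},
    list_keys=lambda: sorted(store),
    delete_keys=lambda ks: [store.discard(k) for k in ks],
)
print(deleted, len(store))  # 3 0
```

The DependsOn attribute in option A matters because CloudFormation deletes resources in reverse dependency order, so the custom resource runs (and empties the bucket) before the bucket itself is deleted.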

Question # 62

A company is currently in the design phase of an application that will need an RPO of less than 5 minutes and an RTO of less than 10 minutes. The solutions architecture team is forecasting that the database will store approximately 10 TB of data. As part of the design, they are looking for a database solution that will provide the company with the ability to fail over to a secondary Region.
Which solution will meet these business requirements at the LOWEST cost?

A. Deploy an Amazon Aurora DB cluster and take snapshots of the cluster every 5 minutes. Once a snapshot is complete, copy the snapshot to a secondary Region to serve as a backup in the event of a failure.
B. Deploy an Amazon RDS instance with a cross-Region read replica in a secondary Region. In the event of a failure, promote the read replica to become the primary.
C. Deploy an Amazon Aurora DB cluster in the primary Region and another in a secondary Region. Use AWS DMS to keep the secondary Region in sync.
D. Deploy an Amazon RDS instance with a read replica in the same Region. In the event of a failure, promote the read replica to become the primary.

Question # 63

A financial company needs to create a separate AWS account for a new digital wallet application. The company uses AWS Organizations to manage its accounts. A solutions architect uses the IAM user Support1 from the management account to create a new member account with finance1@example.com as the email address.
What should the solutions architect do to create IAM users in the new member account?

A. Sign in to the AWS Management Console with AWS account root user credentials by using the 64-character password from the initial AWS Organizations email sent to finance1@example.com. Set up the IAM users as required.
B. From the management account, switch roles to assume the OrganizationAccountAccessRole role with the account ID of the new member account. Set up the IAM users as required.
C. Go to the AWS Management Console sign-in page. Choose "Sign in using root account credentials." Sign in by using the email address finance1@example.com and the management account's root password. Set up the IAM users as required.
D. Go to the AWS Management Console sign-in page. Sign in by using the account ID of the new member account and the Support1 IAM credentials. Set up the IAM users as required.
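Option B relies on the OrganizationAccountAccessRole that AWS Organizations automatically creates in every member account it provisions. A minimal sketch of building the assume-role request is shown below; the session name is an invented placeholder, and the request would be passed to `boto3.client("sts").assume_role(**request)` from the management account.

```python
def org_access_role_request(member_account_id, session_name="AdminSetup"):
    """Build the STS AssumeRole parameters for the default role that
    Organizations creates in a new member account."""
    return {
        "RoleArn": (f"arn:aws:iam::{member_account_id}"
                    ":role/OrganizationAccountAccessRole"),
        "RoleSessionName": session_name,
    }
```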

Question # 64

A company has a solution that analyzes weather data from thousands of weather stations. The weather stations send the data over an Amazon API Gateway REST API that has an AWS Lambda function integration. The Lambda function calls a third-party service for data pre-processing. The third-party service gets overloaded and fails the pre-processing, causing a loss of data. A solutions architect must improve the resiliency of the solution. The solutions architect must ensure that no data is lost and that data can be processed later if failures occur.
What should the solutions architect do to meet these requirements?

A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the queue as the dead-letter queue for the API.
B. Create two Amazon Simple Queue Service (Amazon SQS) queues: a primary queue and a secondary queue. Configure the secondary queue as the dead-letter queue for the primary queue. Update the API to use a new integration to the primary queue. Configure the Lambda function as the invocation target for the primary queue.
C. Create two Amazon EventBridge event buses: a primary event bus and a secondary event bus. Update the API to use a new integration to the primary event bus. Configure an EventBridge rule to react to all events on the primary event bus. Specify the Lambda function as the target of the rule. Configure the secondary event bus as the failure destination for the Lambda function.
D. Create a custom Amazon EventBridge event bus. Configure the event bus as the failure destination for the Lambda function.
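The primary/dead-letter queue pairing in option B is configured through the SQS RedrivePolicy attribute. The sketch below builds that attribute; the DLQ ARN and maxReceiveCount value are illustrative assumptions.

```python
import json

def redrive_policy(dlq_arn, max_receives=3):
    """Build the queue attribute that sends a message to the DLQ after
    it has been received (and not deleted) max_receives times."""
    return {
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": max_receives,
        })
    }
```

The returned dict can be passed as the `Attributes` parameter of `boto3.client("sqs").create_queue` or `set_queue_attributes` for the primary queue.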

Question # 65

A research center is migrating to the AWS Cloud and has moved its on-premises 1 PB object storage to an Amazon S3 bucket. One hundred scientists are using this object storage to store their work-related documents. Each scientist has a personal folder on the object store. All the scientists are members of a single IAM user group. The research center's compliance officer is worried that scientists will be able to access each other's work. The research center has a strict obligation to report on which scientist accesses which documents. The team that is responsible for these reports has little AWS experience and wants a ready-to-use solution that minimizes operational overhead.
Which combination of actions should a solutions architect take to meet these requirements? (Select TWO.)

A. Create an identity policy that grants the user read and write access. Add a condition that specifies that the S3 paths must be prefixed with ${aws:username}. Apply the policy on the scientists' IAM user group.
B. Configure a trail with AWS CloudTrail to capture all object-level events in the S3 bucket. Store the trail output in another S3 bucket. Use Amazon Athena to query the logs and generate reports.
C. Enable S3 server access logging. Configure another S3 bucket as the target for log delivery. Use Amazon Athena to query the logs and generate reports.
D. Create an S3 bucket policy that grants read and write access to users in the scientists' IAM user group.
E. Configure a trail with AWS CloudTrail to capture all object-level events in the S3 bucket and write the events to Amazon CloudWatch. Use the Amazon Athena CloudWatch connector to query the logs and generate reports.
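The per-user prefix restriction from option A hinges on the ${aws:username} IAM policy variable, which resolves to the calling user's name at evaluation time. A sketch of such an identity policy is below, expressed as a Python dict; the bucket name and action list are placeholder assumptions.

```python
def per_user_prefix_policy(bucket="research-bucket"):
    """Identity policy granting each IAM user access only to S3 keys
    under a prefix equal to their own user name."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            # ${aws:username} is substituted per caller by IAM.
            "Resource": f"arn:aws:s3:::{bucket}/${{aws:username}}/*",
        }],
    }
```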

Question # 66

A company is using AWS Organizations with a multi-account architecture. The company's current security configuration for the account architecture includes SCPs, resource-based policies, identity-based policies, trust policies, and session policies. A solutions architect needs to allow an IAM user in Account A to assume a role in Account B.
Which combination of steps must the solutions architect take to meet this requirement? (Select THREE.)

A. Configure the SCP for Account A to allow the action.
B. Configure the resource-based policies to allow the action.
C. Configure the identity-based policy on the user in Account A to allow the action.
D. Configure the identity-based policy on the user in Account B to allow the action.
E. Configure the trust policy on the target role in Account B to allow the action.
F. Configure the session policy to allow the action and to be passed programmatically by the GetSessionToken API operation.
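Cross-account role assumption needs two matching policy halves: an identity-based policy on the user in Account A that allows sts:AssumeRole on the target role, and a trust policy on the role in Account B that names Account A as a trusted principal. The sketch below shows both; the account IDs and role ARN are placeholders.

```python
def identity_policy(role_arn):
    """Attached to the user in Account A: permission to assume the
    target role in Account B."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": role_arn,
        }],
    }

def trust_policy(trusted_account_id):
    """Attached to the role in Account B: Account A's principals may
    assume this role."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{trusted_account_id}:root"},
            "Action": "sts:AssumeRole",
        }],
    }
```

If an SCP in either account denies sts:AssumeRole, the request fails regardless of these two policies.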

Question # 67

A company is migrating its infrastructure to the AWS Cloud. The company must comply with a variety of regulatory standards for different projects. The company needs a multi-account environment. A solutions architect needs to prepare the baseline infrastructure. The solution must provide a consistent baseline of management and security, but it must allow flexibility for different compliance requirements within various AWS accounts. The solution also needs to integrate with the existing on-premises Active Directory Federation Services (AD FS) server.
Which solution meets these requirements with the LEAST amount of operational overhead?

A. Create an organization in AWS Organizations. Create a single SCP for least privilege access across all accounts. Create a single OU for all accounts. Configure an IAM identity provider for federation with the on-premises AD FS server. Configure a central logging account with a defined process for log-generating services to send log events to the central account. Enable AWS Config in the central account with conformance packs for all accounts.
B. Create an organization in AWS Organizations. Enable AWS Control Tower on the organization. Review included controls (guardrails) for SCPs. Check AWS Config for areas that require additions. Add OUs as necessary. Connect AWS IAM Identity Center (AWS Single Sign-On) to the on-premises AD FS server.
C. Create an organization in AWS Organizations. Create SCPs for least privilege access. Create an OU structure, and use it to group AWS accounts. Connect AWS IAM Identity Center (AWS Single Sign-On) to the on-premises AD FS server. Configure a central logging account with a defined process for log-generating services to send log events to the central account. Enable AWS Config in the central account with aggregators and conformance packs.
D. Create an organization in AWS Organizations. Enable AWS Control Tower on the organization. Review included controls (guardrails) for SCPs. Check AWS Config for areas that require additions. Configure an IAM identity provider for federation with the on-premises AD FS server.

Question # 68

A company needs to store and process image data that will be uploaded from mobile devices using a custom mobile app. Usage peaks between 8 AM and 5 PM on weekdays, with thousands of uploads per minute. The app is rarely used at any other time. A user is notified when image processing is complete.
Which combination of actions should a solutions architect take to ensure image processing can scale to handle the load? (Select THREE.)

A. Upload files from the mobile software directly to Amazon S3. Use S3 event notifications to create a message in an Amazon MQ queue.
B. Upload files from the mobile software directly to Amazon S3. Use S3 event notifications to create a message in an Amazon Simple Queue Service (Amazon SQS) standard queue.
C. Invoke an AWS Lambda function to perform image processing when a message is available in the queue.
D. Invoke an S3 Batch Operations job to perform image processing when a message is available in the queue.
E. Send a push notification to the mobile app by using Amazon Simple Notification Service (Amazon SNS) when processing is complete.
F. Send a push notification to the mobile app by using Amazon Simple Email Service (Amazon SES) when processing is complete.
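In the S3 -> SQS -> Lambda pipeline that options B and C describe, the Lambda function receives SQS records whose bodies are S3 event notifications serialized as JSON. The parser below is a sketch trimmed to the fields it actually reads; the standard S3 notification format carries many more fields.

```python
import json

def extract_uploads(sqs_event):
    """Return (bucket, key) pairs from an SQS-wrapped S3 event, the
    shape a Lambda sees when SQS is its event source."""
    uploads = []
    for record in sqs_event["Records"]:
        body = json.loads(record["body"])       # S3 notification JSON
        for s3_record in body.get("Records", []):
            uploads.append((s3_record["s3"]["bucket"]["name"],
                            s3_record["s3"]["object"]["key"]))
    return uploads
```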

Question # 69

A company has mounted sensors to collect information about environmental parameters such as humidity and light throughout all the company's factories. The company needs to stream and analyze the data in the AWS Cloud in real time. If any of the parameters fall out of acceptable ranges, the factory operations team must receive a notification immediately.
Which solution will meet these requirements?

A. Stream the data to an Amazon Kinesis Data Firehose delivery stream. Use AWS Step Functions to consume and analyze the data in the Kinesis Data Firehose delivery stream. Use Amazon Simple Notification Service (Amazon SNS) to notify the operations team.
B. Stream the data to an Amazon Managed Streaming for Apache Kafka (Amazon MSK) cluster. Set up a trigger in Amazon MSK to invoke an AWS Fargate task to analyze the data. Use Amazon Simple Email Service (Amazon SES) to notify the operations team.
C. Stream the data to an Amazon Kinesis data stream. Create an AWS Lambda function to consume the Kinesis data stream and to analyze the data. Use Amazon Simple Notification Service (Amazon SNS) to notify the operations team.
D. Stream the data to an Amazon Kinesis Data Analytics application. Use an automatically scaled and containerized service in Amazon Elastic Container Service (Amazon ECS) to consume and analyze the data. Use Amazon Simple Email Service (Amazon SES) to notify the operations team.
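The analysis step in option C reduces to a per-parameter range check inside the Kinesis-consuming Lambda function. The limits below are invented for illustration; a real function would load them from configuration and call `sns.publish` for each out-of-range reading.

```python
# Placeholder acceptable ranges per parameter (illustrative only).
LIMITS = {"humidity": (30.0, 60.0), "light": (100.0, 1000.0)}

def out_of_range(reading, limits=LIMITS):
    """True when a sensor reading falls outside its acceptable range
    and should trigger an SNS notification."""
    low, high = limits[reading["parameter"]]
    return not (low <= reading["value"] <= high)
```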

Question # 70

A software company needs to create short-lived test environments to test pull requests as part of its development process. Each test environment consists of a single Amazon EC2 instance that is in an Auto Scaling group. The test environments must be able to communicate with a central server to report test results. The central server is located in an on-premises data center. A solutions architect must implement a solution so that the company can create and delete test environments without any manual intervention. The company has created a transit gateway with a VPN attachment to the on-premises network.
Which solution will meet these requirements with the LEAST operational overhead?

A. Create an AWS CloudFormation template that contains a transit gateway attachment and related routing configurations. Create a CloudFormation stack set that includes this template. Use CloudFormation StackSets to deploy a new stack for each VPC in the account. Deploy a new VPC for each test environment.
B. Create a single VPC for the test environments. Include a transit gateway attachment and related routing configurations. Use AWS CloudFormation to deploy all test environments into the VPC.
C. Create a new OU in AWS Organizations for testing. Create an AWS CloudFormation template that contains a VPC, necessary networking resources, a transit gateway attachment, and related routing configurations. Create a CloudFormation stack set that includes this template. Use CloudFormation StackSets for deployments into each account under the testing OU. Create a new account for each test environment.
D. Convert the test environment EC2 instances into Docker images. Use AWS CloudFormation to configure an Amazon Elastic Kubernetes Service (Amazon EKS) cluster in a new VPC, create a transit gateway attachment, and create related routing configurations. Use Kubernetes to manage the deployment and lifecycle of the test environments.

Question # 71

A company is deploying AWS Lambda functions that access an Amazon RDS for PostgreSQL database. The company needs to launch the Lambda functions in a QA environment and in a production environment. The company must not expose credentials within application code and must rotate passwords automatically.
Which solution will meet these requirements?

A. Store the database credentials for both environments in AWS Systems Manager Parameter Store. Encrypt the credentials by using an AWS Key Management Service (AWS KMS) key. Within the application code of the Lambda functions, pull the credentials from the Parameter Store parameter by using the AWS SDK for Python (Boto3). Add a role to the Lambda functions to provide access to the Parameter Store parameter.
B. Store the database credentials for both environments in AWS Secrets Manager with distinct key entries for the QA environment and the production environment. Turn on rotation. Provide a reference to the Secrets Manager key as an environment variable for the Lambda functions.
C. Store the database credentials for both environments in AWS Key Management Service (AWS KMS). Turn on rotation. Provide a reference to the credentials that are stored in AWS KMS as an environment variable for the Lambda functions.
D. Create separate S3 buckets for the QA environment and the production environment. Turn on server-side encryption with AWS KMS keys (SSE-KMS) for the S3 buckets. Use an object naming pattern that gives each Lambda function's application code the ability to pull the correct credentials for the function's corresponding environment. Grant each Lambda function's execution role access to Amazon S3.

Question # 72

A company has a legacy application that runs on multiple .NET Framework components. The components share the same Microsoft SQL Server database and communicate with each other asynchronously by using Microsoft Message Queueing (MSMQ). The company is starting a migration to containerized .NET Core components and wants to refactor the application to run on AWS. The .NET Core components require complex orchestration. The company must have full control over networking and host configuration. The application's database model is strongly relational.
Which solution will meet these requirements?

A. Host the .NET Core components on AWS App Runner. Host the database on Amazon RDS for SQL Server. Use Amazon EventBridge for asynchronous messaging.
B. Host the .NET Core components on Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. Host the database on Amazon DynamoDB. Use Amazon Simple Notification Service (Amazon SNS) for asynchronous messaging.
C. Host the .NET Core components on AWS Elastic Beanstalk. Host the database on Amazon Aurora PostgreSQL Serverless v2. Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) for asynchronous messaging.
D. Host the .NET Core components on Amazon Elastic Container Service (Amazon ECS) with the Amazon EC2 launch type. Host the database on Amazon Aurora MySQL Serverless v2. Use Amazon Simple Queue Service (Amazon SQS) for asynchronous messaging.

Question # 73

A research company is running daily simulations in the AWS Cloud to meet high demand. The simulations run on several hundred Amazon EC2 instances that are based on Amazon Linux 2. Occasionally, a simulation gets stuck and requires a cloud operations engineer to solve the problem by connecting to an EC2 instance through SSH. Company policy states that no EC2 instance can use the same SSH key and that all connections must be logged in AWS CloudTrail.
How can a solutions architect meet these requirements?

A. Launch new EC2 instances, and generate an individual SSH key for each instance. Store the SSH key in AWS Secrets Manager. Create a new IAM policy, and attach it to the engineers' IAM role with an Allow statement for the GetSecretValue action. Instruct the engineers to fetch the SSH key from Secrets Manager when they connect through any SSH client.
B. Create an AWS Systems Manager document to run commands on EC2 instances to set a new unique SSH key. Create a new IAM policy, and attach it to the engineers' IAM role with an Allow statement to run Systems Manager documents. Instruct the engineers to run the document to set an SSH key and to connect through any SSH client.
C. Launch new EC2 instances without setting up any SSH key for the instances. Set up EC2 Instance Connect on each instance. Create a new IAM policy, and attach it to the engineers' IAM role with an Allow statement for the SendSSHPublicKey action. Instruct the engineers to connect to the instance by using a browser-based SSH client from the EC2 console.
D. Set up AWS Secrets Manager to store the EC2 SSH key. Create a new AWS Lambda function to create a new SSH key and to call AWS Systems Manager Session Manager to set the SSH key on the EC2 instance. Configure Secrets Manager to use the Lambda function for automatic rotation once daily. Instruct the engineers to fetch the SSH key from Secrets Manager when they connect through any SSH client.
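Option C hinges on the ec2-instance-connect:SendSSHPublicKey action, which pushes a short-lived public key to the instance and is recorded in CloudTrail, satisfying both policy requirements. A sketch of the engineers' IAM policy is below; the Region, account ID, and OS user are placeholder assumptions.

```python
def instance_connect_policy(region="us-east-1", account="111122223333"):
    """IAM policy allowing EC2 Instance Connect key pushes as the
    ec2-user OS user on any instance in the account."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "ec2-instance-connect:SendSSHPublicKey",
            "Resource": f"arn:aws:ec2:{region}:{account}:instance/*",
            # Restrict which OS account the pushed key may log in as.
            "Condition": {"StringEquals": {"ec2:osuser": "ec2-user"}},
        }],
    }
```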

Question # 74

A company wants to migrate its on-premises data center to the AWS Cloud. This includes thousands of virtualized Linux and Microsoft Windows servers, SAN storage, Java and PHP applications with MySQL, and Oracle databases. There are many dependent services hosted either in the same data center or externally. The technical documentation is incomplete and outdated. A solutions architect needs to understand the current environment and estimate the cloud resource costs after the migration.
Which tools or services should the solutions architect use to plan the cloud migration? (Choose three.)

A. AWS Application Discovery Service
B. AWS SMS
C. AWS X-Ray
D. AWS Cloud Adoption Readiness Tool (CART)
E. Amazon Inspector
F. AWS Migration Hub

Question # 75

A company runs many workloads on AWS and uses AWS Organizations to manage its accounts. The workloads are hosted on Amazon EC2, AWS Fargate, and AWS Lambda. Some of the workloads have unpredictable demand. Accounts record high usage in some months and low usage in other months. The company wants to optimize its compute costs over the next 3 years. A solutions architect obtains a 6-month average for each of the accounts across the organization to calculate usage.
Which solution will provide the MOST cost savings for all the organization's compute usage?

A. Purchase Reserved Instances for the organization to match the size and number of the most common EC2 instances from the member accounts.
B. Purchase a Compute Savings Plan for the organization from the management account by using the recommendation at the management account level.
C. Purchase Reserved Instances for each member account that had high EC2 usage according to the data from the last 6 months.
D. Purchase an EC2 Instance Savings Plan for each member account from the management account based on EC2 usage data from the last 6 months.

Question # 76

A solutions architect is determining the DNS strategy for an existing VPC. The VPC is provisioned to use the 10.24.34.0/24 CIDR block. The VPC also uses Amazon Route 53 Resolver for DNS. New requirements mandate that DNS queries must use private hosted zones. Additionally, instances that have public IP addresses must receive corresponding public hostnames.
Which solution will meet these requirements to ensure that the domain names are correctly resolved within the VPC?

A. Create a private hosted zone. Activate the enableDnsSupport attribute and the enableDnsHostnames attribute for the VPC. Update the VPC DHCP options set to include domain-name-servers=10.24.34.2.
B. Create a private hosted zone. Associate the private hosted zone with the VPC. Activate the enableDnsSupport attribute and the enableDnsHostnames attribute for the VPC. Create a new VPC DHCP options set, and configure domain-name-servers=AmazonProvidedDNS. Associate the new DHCP options set with the VPC.
C. Deactivate the enableDnsSupport attribute for the VPC. Activate the enableDnsHostnames attribute for the VPC. Create a new VPC DHCP options set, and configure domain-name-servers=10.24.34.2. Associate the new DHCP options set with the VPC.
D. Create a private hosted zone. Associate the private hosted zone with the VPC. Activate the enableDnsSupport attribute for the VPC. Deactivate the enableDnsHostnames attribute for the VPC. Update the VPC DHCP options set to include domain-name-servers=AmazonProvidedDNS.
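The DHCP options set in option B simply points instances at the AmazonProvidedDNS resolver. The sketch below builds a boto3-style `create_dhcp_options` request body; it is not run against AWS here, and the domain name value is a placeholder assumption.

```python
def dhcp_options_request(domain_name="ec2.internal"):
    """Build the DhcpConfigurations payload that directs VPC instances
    to the Amazon-provided DNS resolver."""
    return {
        "DhcpConfigurations": [
            {"Key": "domain-name", "Values": [domain_name]},
            {"Key": "domain-name-servers", "Values": ["AmazonProvidedDNS"]},
        ]
    }
```

With enableDnsSupport and enableDnsHostnames both active, the Amazon resolver answers private hosted zone queries inside the VPC and assigns public hostnames to instances with public IP addresses.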

Question # 77

A large company is migrating its entire IT portfolio to AWS. Each business unit in the company has a standalone AWS account that supports both development and test environments. New accounts to support production workloads will be needed soon. The finance department requires a centralized method for payment but must maintain visibility into each group's spending to allocate costs. The security team requires a centralized mechanism to control IAM usage in all the company's accounts.
What combination of the following options meets the company's needs with the LEAST effort? (Select TWO.)

A. Use a collection of parameterized AWS CloudFormation templates defining common IAM permissions that are launched into each account. Require all new and existing accounts to launch the appropriate stacks to enforce the least privilege model.
B. Use AWS Organizations to create a new organization from a chosen payer account and define an organizational unit hierarchy. Invite the existing accounts to join the organization and create new accounts using Organizations.
C. Require each business unit to use its own AWS accounts. Tag each AWS account appropriately and enable Cost Explorer to administer chargebacks.
D. Enable all features of AWS Organizations and establish appropriate service control policies that filter IAM permissions for sub-accounts.
E. Consolidate all of the company's AWS accounts into a single AWS account. Use tags for billing purposes and the IAM Access Advisor feature to enforce the least privilege model.

Question # 78

An enterprise company is building an infrastructure services platform for its users. The company has the following requirements: provide least privilege access to users when launching AWS infrastructure so users cannot provision unapproved services; use a central account to manage the creation of infrastructure services; provide the ability to distribute infrastructure services to multiple accounts in AWS Organizations; and provide the ability to enforce tags on any infrastructure that is started by users.
Which combination of actions using AWS services will meet these requirements? (Choose three.)

A. Develop infrastructure services using AWS CloudFormation templates. Add the templates to a central Amazon S3 bucket, and add the IAM roles or users that require access to the S3 bucket policy.
B. Develop infrastructure services using AWS CloudFormation templates. Upload each template as an AWS Service Catalog product to portfolios created in a central AWS account. Share these portfolios with the Organizations structure created for the company.
C. Allow user IAM roles to have AWSCloudFormationFullAccess and AmazonS3ReadOnlyAccess permissions. Add an Organizations SCP at the AWS account root user level to deny all services except AWS CloudFormation and Amazon S3.
D. Allow user IAM roles to have ServiceCatalogEndUserAccess permissions only. Use an automation script to import the central portfolios to local AWS accounts, copy the TagOptions, assign users access, and apply launch constraints.
E. Use the AWS Service Catalog TagOption Library to maintain a list of tags required by the company. Apply the TagOption to AWS Service Catalog products or portfolios.
F. Use the AWS CloudFormation Resource Tags property to enforce the application of tags to any CloudFormation templates that will be created for users.

Question # 79

A company is migrating a legacy application from an on-premises data center to AWS. The application consists of a single application server and a Microsoft SQL Server database server. Each server is deployed on a VMware VM that consumes 500 TB of data across multiple attached volumes. The company has established a 10 Gbps AWS Direct Connect connection from the closest AWS Region to its on-premises data center. The Direct Connect connection is not currently in use by other services.
Which combination of steps should a solutions architect take to migrate the application with the LEAST amount of downtime? (Choose two.)

A. Use an AWS Server Migration Service (AWS SMS) replication job to migrate the database server VM to AWS.
B. Use VM Import/Export to import the application server VM.
C. Export the VM images to an AWS Snowball Edge Storage Optimized device.
D. Use an AWS Server Migration Service (AWS SMS) replication job to migrate the application server VM to AWS.
E. Use an AWS Database Migration Service (AWS DMS) replication instance to migrate the database to an Amazon RDS DB instance.

Question # 80

A company has an application that uses an Amazon Aurora PostgreSQL DB cluster for the application's database. The DB cluster contains one small primary instance and three larger replica instances. The application runs on an AWS Lambda function. The application makes many short-lived connections to the database's replica instances to perform read-only operations. During periods of high traffic, the application becomes unreliable, and the database reports that too many connections are being established. The frequency of high-traffic periods is unpredictable.
Which solution will improve the reliability of the application?

A. Use Amazon RDS Proxy to create a proxy for the DB cluster. Configure a read-only endpoint for the proxy. Update the Lambda function to connect to the proxy endpoint.
B. Increase the max_connections setting on the DB cluster's parameter group. Reboot all the instances in the DB cluster. Update the Lambda function to connect to the DB cluster endpoint.
C. Configure instance scaling for the DB cluster to occur when the DatabaseConnections metric is close to the max_connections setting. Update the Lambda function to connect to the Aurora reader endpoint.
D. Use Amazon RDS Proxy to create a proxy for the DB cluster. Configure a read-only endpoint for the Aurora Data API on the proxy. Update the Lambda function to connect to the proxy endpoint.

Question # 81

A company is planning to migrate its on-premises transaction-processing application to AWS. The application runs inside Docker containers that are hosted on VMs in the company's data center. The Docker containers have shared storage where the application records transaction data. The transactions are time sensitive. The volume of transactions inside the application is unpredictable. The company must implement a low-latency storage solution that will automatically scale throughput to meet increased demand. The company cannot develop the application further and cannot continue to administer the Docker hosting environment.
How should the company migrate the application to AWS to meet these requirements?

A. Migrate the containers that run the application to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon S3 to store the transaction data that the containers share.
B. Migrate the containers that run the application to AWS Fargate for Amazon Elastic Container Service (Amazon ECS). Create an Amazon Elastic File System (Amazon EFS) file system. Create a Fargate task definition. Add a volume to the task definition to point to the EFS file system.
C. Migrate the containers that run the application to AWS Fargate for Amazon Elastic Container Service (Amazon ECS). Create an Amazon Elastic Block Store (Amazon EBS) volume. Create a Fargate task definition. Attach the EBS volume to each running task.
D. Launch Amazon EC2 instances. Install Docker on the EC2 instances. Migrate the containers to the EC2 instances. Create an Amazon Elastic File System (Amazon EFS) file system. Add a mount point to the EC2 instances for the EFS file system.
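A trimmed sketch of the Fargate task definition that option B describes is shown below, with the EFS file system mounted as a shared volume. The family name, image, sizes, and file system ID are all placeholder assumptions; a real definition would be registered with `boto3.client("ecs").register_task_definition(**task_def)`.

```python
def fargate_task_with_efs(fs_id="fs-0123456789abcdef0"):
    """Build a minimal Fargate task definition whose container mounts
    an EFS file system at /data."""
    return {
        "family": "txn-app",
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",
        "cpu": "512",
        "memory": "1024",
        "volumes": [{
            "name": "shared-data",
            "efsVolumeConfiguration": {"fileSystemId": fs_id},
        }],
        "containerDefinitions": [{
            "name": "app",
            "image": "txn-app:latest",
            "mountPoints": [{"sourceVolume": "shared-data",
                             "containerPath": "/data"}],
        }],
    }
```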

Question # 82

An online retail company is migrating its legacy on-premises .NET application to AWS. The application runs on load-balanced frontend web servers, load-balanced application servers, and a Microsoft SQL Server database. The company wants to use AWS managed services where possible and does not want to rewrite the application. A solutions architect needs to implement a solution to resolve scaling issues and minimize licensing costs as the application scales.
Which solution will meet these requirements MOST cost-effectively?

A. Deploy Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer for the web tier and for the application tier. Use Amazon Aurora PostgreSQL with Babelfish turned on to replatform the SQL Server database.
B. Create images of all the servers by using AWS Database Migration Service (AWS DMS). Deploy Amazon EC2 instances that are based on the on-premises imports. Deploy the instances in an Auto Scaling group behind a Network Load Balancer for the web tier and for the application tier. Use Amazon DynamoDB as the database tier.
C. Containerize the web frontend tier and the application tier. Provision an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Create an Auto Scaling group behind a Network Load Balancer for the web tier and for the application tier. Use Amazon RDS for SQL Server to host the database.
D. Separate the application functions into AWS Lambda functions. Use Amazon API Gateway for the web frontend tier and the application tier. Migrate the data to Amazon S3. Use Amazon Athena to query the data.

Question # 83

A company is deploying a third-party web application on AWS. The application is packaged as a Docker image. The company has deployed the Docker image as an AWS Fargate service in Amazon Elastic Container Service (Amazon ECS). An Application Load Balancer (ALB) directs traffic to the application. The company needs to give only a specific list of users the ability to access the application from the internet. The company cannot change the application and cannot integrate the application with an identity provider. All users must be authenticated through multi-factor authentication (MFA).
Which solution will meet these requirements?

A. Create a user pool in Amazon Cognito. Configure the pool for the application. Populate the pool with the required users. Configure the pool to require MFA. Configure a listener rule on the ALB to require authentication through the Amazon Cognito hosted UI.
B. Configure the users in AWS Identity and Access Management (IAM). Attach a resource policy to the Fargate service to require users to use MFA. Configure a listener rule on the ALB to require authentication through IAM.
C. Configure the users in AWS Identity and Access Management (IAM). Enable AWS IAM Identity Center (AWS Single Sign-On). Configure resource protection for the ALB. Create a resource protection rule to require users to use MFA.
D. Create a user pool in AWS Amplify. Configure the pool for the application. Populate the pool with the required users. Configure the pool to require MFA. Configure a listener rule on the ALB to require authentication through the Amplify hosted UI.

Question # 84

A company built an ecommerce website on AWS using a three-tier web architecture. The application is Java-based and composed of an Amazon CloudFront distribution, an Apache web server layer of Amazon EC2 instances in an Auto Scaling group, and a backend Amazon Aurora MySQL database. Last month, during a promotional sales event, users reported errors and timeouts while adding items to their shopping carts. The operations team recovered the logs created by the web servers and reviewed Aurora DB cluster performance metrics. Some of the web servers were terminated before logs could be collected, and the Aurora metrics were not sufficient for query performance analysis.
Which combination of steps must the solutions architect take to improve application performance visibility during peak traffic events? (Choose three.)

A. Configure the Aurora MySQL DB cluster to publish slow query and error logs to Amazon CloudWatch Logs.
B. Implement the AWS X-Ray SDK to trace incoming HTTP requests on the EC2 instances and implement tracing of SQL queries with the X-Ray SDK for Java.
C. Configure the Aurora MySQL DB cluster to stream slow query and error logs to Amazon Kinesis.
D. Install and configure an Amazon CloudWatch Logs agent on the EC2 instances to send the Apache logs to CloudWatch Logs.
E. Enable and configure AWS CloudTrail to collect and analyze application activity from Amazon EC2 and Aurora.
F. Enable Aurora MySQL DB cluster performance benchmarking and publish the stream to AWS X-Ray.
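Option D addresses the lost-logs problem directly: the CloudWatch agent streams Apache logs off each instance continuously, so nothing is lost when the Auto Scaling group terminates a server. A sketch of the `logs` section of the agent's configuration file is shown below; the file paths and log group names are illustrative assumptions, not values from the question.

```python
import json

# "logs" section of an Amazon CloudWatch agent configuration that ships
# Apache access and error logs to CloudWatch Logs as they are written,
# before scale-in can terminate the instance. Paths and log group names
# are assumptions for illustration.
agent_config = {
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "/var/log/httpd/access_log",
                        "log_group_name": "ecommerce/apache/access",
                        "log_stream_name": "{instance_id}",
                    },
                    {
                        "file_path": "/var/log/httpd/error_log",
                        "log_group_name": "ecommerce/apache/error",
                        "log_stream_name": "{instance_id}",
                    },
                ]
            }
        }
    }
}

config_json = json.dumps(agent_config, indent=2)
```

The `{instance_id}` placeholder is expanded by the agent itself, which keeps one log stream per EC2 instance.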

Question # 85

A company provides a software as a service (SaaS) application that runs in the AWS Cloud. The application runs on Amazon EC2 instances behind a Network Load Balancer (NLB). The instances are in an Auto Scaling group and are distributed across three Availability Zones in a single AWS Region. The company is deploying the application into additional Regions. The company must provide static IP addresses for the application to customers so that the customers can add the IP addresses to allow lists. The solution must automatically route customers to the Region that is geographically closest to them. Which solution will meet these requirements?

A. Create an Amazon CloudFront distribution. Create a CloudFront origin group. Add the NLB for each additional Region to the origin group. Provide customers with the IP address ranges of the distribution's edge locations.
B. Create an AWS Global Accelerator standard accelerator. Create a standard accelerator endpoint for the NLB in each additional Region. Provide customers with the Global Accelerator IP address.
C. Create an Amazon CloudFront distribution. Create a custom origin for the NLB in each additional Region. Provide customers with the IP address ranges of the distribution's edge locations.
D. Create an AWS Global Accelerator custom routing accelerator. Create a listener for the custom routing accelerator. Add the IP address and ports for the NLB in each additional Region. Provide customers with the Global Accelerator IP address.
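A standard accelerator (option B) gives customers two static anycast IP addresses for their allow lists, while Global Accelerator routes each client to the closest healthy endpoint group. A sketch of the resources involved, expressed as the request parameters a script might pass to the `create_accelerator`, `create_listener`, and `create_endpoint_group` APIs (names, Regions, and ARNs are placeholders):

```python
# AWS Global Accelerator standard accelerator fronting one NLB per Region.
# The accelerator's two static anycast IPs never change, so customers can
# allow-list them once. All names and ARNs are illustrative placeholders.
accelerator = {"Name": "saas-accelerator", "IpAddressType": "IPV4", "Enabled": True}

listener = {"Protocol": "TCP", "PortRanges": [{"FromPort": 443, "ToPort": 443}]}

# One endpoint group per Region; traffic dials and health checks decide
# which Region a given client is routed to.
regional_nlbs = [
    ("us-east-1", "arn:aws:elasticloadbalancing:us-east-1:ACCOUNT:loadbalancer/net/example-a/111"),
    ("eu-west-1", "arn:aws:elasticloadbalancing:eu-west-1:ACCOUNT:loadbalancer/net/example-b/222"),
]
endpoint_groups = [
    {
        "EndpointGroupRegion": region,
        "Endpoints": [{"EndpointId": nlb_arn, "Weight": 128}],
    }
    for region, nlb_arn in regional_nlbs
]
```

CloudFront (options A and C) cannot satisfy the requirement, since edge-location IP ranges are large and change over time, which makes customer allow lists impractical.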

Question # 86

A company has a project that is launching Amazon EC2 instances that are larger than required. The project's account cannot be part of the company's organization in AWS Organizations due to policy restrictions to keep this activity outside of corporate IT. The company wants to allow only the launch of t3.small EC2 instances by developers in the project's account. These EC2 instances must be restricted to the us-east-2 Region. What should a solutions architect do to meet these requirements?

A. Create a new developer account. Move all EC2 instances, users, and assets into us-east-2. Add the account to the company's organization in AWS Organizations. Enforce a tagging policy that denotes Region affinity.
B. Create an SCP that denies the launch of all EC2 instances except t3.small EC2 instances in us-east-2. Attach the SCP to the project's account.
C. Create and purchase a t3.small EC2 Reserved Instance for each developer in us-east-2. Assign each developer a specific EC2 instance with their name as the tag.
D. Create an IAM policy that allows the launch of only t3.small EC2 instances in us-east-2. Attach the policy to the roles and groups that the developers use in the project's account.
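Because the account cannot join AWS Organizations, SCPs are unavailable, which points at an identity-based IAM policy (option D). A simplified sketch of such a policy, using the real `ec2:InstanceType` and `aws:RequestedRegion` condition keys; a production policy would also scope the AMI, subnet, security group, and other resources that `ec2:RunInstances` touches:

```python
# IAM identity-based policy allowing RunInstances only for t3.small
# instances in us-east-2. Attached to developer roles/groups since the
# account sits outside AWS Organizations and cannot receive an SCP.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOnlyT3SmallInUsEast2",
            "Effect": "Allow",
            "Action": "ec2:RunInstances",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    # Condition keys restrict both size and Region.
                    "ec2:InstanceType": "t3.small",
                    "aws:RequestedRegion": "us-east-2",
                }
            },
        }
    ],
}
```

Options A and B fail because both depend on the account joining the organization, which the scenario rules out.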

Question # 87

A large company recently experienced an unexpected increase in Amazon RDS and Amazon DynamoDB costs. The company needs to increase visibility into details of AWS Billing and Cost Management. There are various accounts associated with AWS Organizations, including many development and production accounts. There is no consistent tagging strategy across the organization, but there are guidelines in place that require all infrastructure to be deployed using AWS CloudFormation with consistent tagging. Management requires cost center numbers and project ID numbers for all existing and future DynamoDB tables and RDS instances. Which strategy should the solutions architect provide to meet these requirements?

A. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID, and allow 24 hours for tags to propagate to existing resources.
B. Use an AWS Config rule to alert the finance team of untagged resources. Create a centralized AWS Lambda based solution to tag untagged RDS databases and DynamoDB resources every hour using a cross-account role.
C. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID. Use SCPs to restrict creation of resources that do not have the cost center and project ID tags.
D. Create cost allocation tags to define the cost center and project ID, and allow 24 hours for tags to propagate to existing resources. Update existing federated roles to restrict privileges so that resources cannot be provisioned without the cost center and project ID tags.
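The tag-on-create enforcement in option C can be expressed as an SCP that denies creating DynamoDB tables and RDS instances when a required tag key is missing from the request. A sketch using the `aws:RequestTag` and `Null` condition operators is below; the tag keys are illustrative assumptions, and one statement per key is needed so that a request missing either tag is denied.

```python
# SCP denying DynamoDB table and RDS instance creation unless both
# required tag keys are supplied at creation time. Tag keys are
# illustrative assumptions, not values from the question.
required_tags = ["cost-center", "project-id"]

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": f"RequireTag{i}",
            "Effect": "Deny",
            "Action": ["dynamodb:CreateTable", "rds:CreateDBInstance"],
            "Resource": "*",
            # Null:true matches requests where the tag key is absent,
            # so the Deny fires only when the tag was not provided.
            "Condition": {"Null": {f"aws:RequestTag/{tag}": "true"}},
        }
        for i, tag in enumerate(required_tags)
    ],
}
```

With creation gated this way and both keys activated as cost allocation tags, Cost Explorer can break RDS and DynamoDB spend down by cost center and project ID going forward.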