• support@dumpspool.com
SPECIAL LIMITED-TIME DISCOUNT OFFER. USE DISCOUNT CODE DP2021 TO GET 20% OFF

PDF Only

$35.00 Free Updates for Up to 90 Days

  • DBS-C01 Dumps PDF
  • 324 Questions
  • Updated On March 16, 2024

PDF + Test Engine

$60.00 Free Updates for Up to 90 Days

  • DBS-C01 Question Answers
  • 324 Questions
  • Updated On March 16, 2024

Test Engine

$50.00 Free Updates for Up to 90 Days

  • DBS-C01 Practice Questions
  • 324 Questions
  • Updated On March 16, 2024
Check Our Free Amazon DBS-C01 Online Test Engine Demo.

How to pass Amazon DBS-C01 exam with the help of dumps?

DumpsPool provides the high-quality resources you have been searching for elsewhere without success. So it's time to stop stressing and get ready for the exam. Our Online Test Engine gives you the guidance you need to pass the certification exam. We guarantee top-grade results because we cover each topic in a precise and understandable manner. Our expert team has prepared the latest Amazon DBS-C01 Dumps to meet your training needs, and they come in two formats: Dumps PDF and Online Test Engine.

How Do I Know Amazon DBS-C01 Dumps are Worth it?

Did we mention our latest DBS-C01 Dumps PDF is also available as an Online Test Engine? And that's just the start. Of all the features you are offered here at DumpsPool, the money-back guarantee has to be the best one, so you don't have to worry about your payment. Beyond affordable Real Exam Dumps, you are also offered three months of free updates.

You can easily scroll through our large catalog of certification exams and pick any exam to start your training. That's right: DumpsPool isn't limited to just Amazon exams. We know our customers need an authentic and reliable resource, so we make sure there is never any outdated content in our study materials. Our expert team keeps everything up to the mark by watching every single update. Our main focus is that you understand the real exam format, so you can pass the exam more easily!

IT Students Are Using our AWS Certified Database - Specialty Dumps Worldwide!

It is a well-established fact that certification exams are hard to conquer without some help from experts. That is exactly the point of using AWS Certified Database - Specialty Practice Question Answers: you are supported by IT experts who have been through what you are about to face and know it well. The 24/7 customer service of DumpsPool ensures you are in touch with these experts whenever needed. Our 100% success rate and worldwide validity make us a trusted resource for candidates. The updated Dumps PDF helps you pass the exam on the first attempt, and the money-back guarantee lets you buy with confidence: you can claim a refund if you do not pass the exam.

How to Get DBS-C01 Real Exam Dumps?

Getting access to real exam dumps is as easy as pressing a button, literally! There are various resources available online, but the majority of them sell scams or copied content. So, if you are going to attempt the DBS-C01 exam, you need to be sure you are buying the right kind of dumps. All the Dumps PDF files available on DumpsPool are unique and up to date, and our Practice Question Answers are tested and approved by professionals, making them an authentic resource. Our experts have made sure the Online Test Engine is free from outdated or fake content, repeated questions, and false or vague information. We make every penny count, and you leave our platform fully satisfied!

Amazon DBS-C01 Sample Question Answers

Question # 1

A database specialist is designing the database for a software-as-a-service (SaaS) version of an employee information application. In the current architecture, the change history of employee records is stored in a single table in an Amazon RDS for Oracle database. Triggers on the employee table populate the history table with historical records.

This architecture has two major challenges. First, there is no way to guarantee that the records have not been changed in the history table. Second, queries on the history table are slow because of the large size of the table and the need to run the queries against a large subset of data in the table.

The database specialist must design a solution that prevents modification of the historical records. The solution also must maximize the speed of the queries.

Which solution will meet these requirements?

A. Migrate the current solution to an Amazon DynamoDB table. Use DynamoDB Streams to keep track of changes. Use DynamoDB Accelerator (DAX) to improve query performance.
B. Write employee record history to Amazon Quantum Ledger Database (Amazon QLDB) for historical records and to an Amazon OpenSearch Service domain for queries.
C. Use Amazon Aurora PostgreSQL to store employee record history in a single table. Use Aurora Auto Scaling to provision more capacity.
D. Build a solution that uses an Amazon Redshift cluster for historical records. Query the Redshift cluster directly as needed.

Question # 2

A company runs a customer relationship management (CRM) system that is hosted on premises with a MySQL database as the backend. A custom stored procedure is used to send email notifications to another system when data is inserted into a table. The company has noticed that the performance of the CRM system has decreased due to database reporting applications used by various teams. The company requires an AWS solution that would reduce maintenance, improve performance, and accommodate the email notification feature.

Which AWS solution meets these requirements?

A. Use MySQL running on an Amazon EC2 instance with Auto Scaling to accommodate the reporting applications. Configure a stored procedure and an AWS Lambda function that uses Amazon SES to send email notifications to the other system.
B. Use Amazon Aurora MySQL in a multi-master cluster to accommodate the reporting applications. Configure Amazon RDS event subscriptions to publish a message to an Amazon SNS topic and subscribe the other system's email address to the topic.
C. Use MySQL running on an Amazon EC2 instance with a read replica to accommodate the reporting applications. Configure Amazon SES integration to send email notifications to the other system.
D. Use Amazon Aurora MySQL with a read replica for the reporting applications. Configure a stored procedure and an AWS Lambda function to publish a message to an Amazon SNS topic. Subscribe the other system's email address to the topic.

Question # 3

A company is running a mobile app that has a backend database in Amazon DynamoDB. The app experiences sudden increases and decreases in activity throughout the day. The company's operations team notices that DynamoDB read and write requests are being throttled at different times, resulting in a negative customer experience.

Which solution will solve the throttling issue without requiring changes to the app?

A. Add a DynamoDB table in a secondary AWS Region. Populate the additional table by using DynamoDB Streams.
B. Deploy an Amazon ElastiCache cluster in front of the DynamoDB table.
C. Use on-demand capacity mode for the DynamoDB table.
D. Use DynamoDB Accelerator (DAX).
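As a sketch of the mechanism option C describes, switching an existing table to on-demand capacity mode is a single AWS CLI call against a live account (the table name below is a placeholder):

```shell
# Switch an existing DynamoDB table from provisioned to on-demand
# capacity; the service then absorbs sudden traffic spikes without
# pre-provisioned throughput limits causing throttling.
aws dynamodb update-table \
    --table-name MobileAppBackend \
    --billing-mode PAY_PER_REQUEST
```

Note that this change requires no application code changes, which is why it fits the question's constraint.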

Question # 4

A company needs to deploy an Amazon Aurora PostgreSQL DB instance into multiple accounts. The company will initiate each DB instance from an existing Aurora PostgreSQL DB instance that runs in a shared account. The company wants the process to be repeatable in case the company adds additional accounts in the future. The company also wants to be able to verify if manual changes have been made to the DB instance configurations after the company deploys the DB instances.

A database specialist has determined that the company needs to create an AWS CloudFormation template with the necessary configuration to create a DB instance in an account by using a snapshot of the existing DB instance to initialize the DB instance. The company will also use the CloudFormation template's parameters to provide key values for the DB instance creation (account ID, etc.).

Which final step will meet these requirements in the MOST operationally efficient way?

A. Create a bash script to compare the configuration to the current DB instance configuration and to report any changes.
B. Use the CloudFormation drift detection feature to check if the DB instance configurations have changed.
C. Set up CloudFormation to use drift detection to send notifications if the DB instance configurations have been changed.
D. Create an AWS Lambda function to compare the configuration to the current DB instance configuration and to report any changes.
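As a sketch of the drift detection feature in options B and C, a drift check can be started and inspected from the AWS CLI against a live account (the stack name is a placeholder):

```shell
# Start an asynchronous drift-detection run on the stack that
# created the DB instance.
aws cloudformation detect-stack-drift --stack-name aurora-db-stack

# After the run completes, list only resources whose live
# configuration no longer matches the template.
aws cloudformation describe-stack-resource-drifts \
    --stack-name aurora-db-stack \
    --stack-resource-drift-status-filters MODIFIED DELETED
```

Because drift detection is built into CloudFormation, no custom comparison script or Lambda function needs to be maintained.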

Question # 5

A company migrated an on-premises Oracle database to Amazon RDS for Oracle. A database specialist needs to monitor the latency of the database.

Which solution will meet this requirement with the LEAST operational overhead?

A. Publish RDS Performance Insights metrics to Amazon CloudWatch. Add AWS CloudTrail filters to monitor database performance.
B. Install Oracle Statspack. Enable the performance statistics feature to collect, store, and display performance data to monitor database performance.
C. Enable RDS Performance Insights to visualize the database load. Enable Enhanced Monitoring to view how different threads use the CPU.
D. Create a new DB parameter group that includes the AllocatedStorage, DBInstanceClassMemory, and DBInstanceVCPU variables. Enable RDS Performance Insights.

Question # 6

A company is using an Amazon Aurora PostgreSQL database for a project with a government agency. All database communications must be encrypted in transit. All non-SSL/TLS connection requests must be rejected.

What should a database specialist do to meet these requirements?

A. Set the rds.force_ssl parameter in the DB cluster parameter group to default.
B. Set the rds.force_ssl parameter in the DB cluster parameter group to 1.
C. Set the rds.force_ssl parameter in the DB cluster parameter group to 0.
D. Set the SQLNET.SSL_VERSION option in the DB cluster option group to 12.
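As a sketch of option B's mechanism, the parameter can be set in a custom cluster parameter group from the AWS CLI (the group name is a placeholder):

```shell
# rds.force_ssl=1 makes Aurora PostgreSQL reject any connection
# that does not negotiate SSL/TLS. It is a static parameter, so the
# change takes effect at the next reboot.
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name aurora-pg-params \
    --parameters "ParameterName=rds.force_ssl,ParameterValue=1,ApplyMethod=pending-reboot"
```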

Question # 7

A company plans to use AWS Database Migration Service (AWS DMS) to migrate its database from one Amazon EC2 instance to another EC2 instance as a full load task. The company wants the database to be inactive during the migration. The company will use a dms.t3.medium instance to perform the migration and will use the default settings for the migration.

Which solution will MOST improve the performance of the data migration?

A. Increase the number of tables that are loaded in parallel.
B. Drop all indexes on the source tables.
C. Change the processing mode from the batch optimized apply option to transactional mode.
D. Enable Multi-AZ on the target database while the full load task is in progress.

Question # 8

A company has deployed an application that uses an Amazon RDS for MySQL DB cluster. The DB cluster uses three read replicas. The primary DB instance is an 8XL-sized instance, and the read replicas are each XL-sized instances.

Users report that database queries are returning stale data. The replication lag indicates that the replicas are 5 minutes behind the primary DB instance. Status queries on the replicas show that the SQL_THREAD is 10 binlogs behind the IO_THREAD and that the IO_THREAD is 1 binlog behind the primary.

Which changes will reduce the lag? (Choose two.)

A. Deploy two additional read replicas matching the existing replica DB instance size.
B. Migrate the primary DB instance to an Amazon Aurora MySQL DB cluster and add three Aurora Replicas.
C. Move the read replicas to the same Availability Zone as the primary DB instance.
D. Increase the instance size of the primary DB instance within the same instance class.
E. Increase the instance size of the read replicas to the same size and class as the primary DB instance.

Question # 9

A media company hosts a highly available news website on AWS but needs to improve its page load time, especially during very popular news releases. Once a news page is published, it is very unlikely to change unless an error is identified. The company has decided to use Amazon ElastiCache.

What is the recommended strategy for this use case?

A. Use ElastiCache for Memcached with write-through and long time to live (TTL)
B. Use ElastiCache for Redis with lazy loading and short time to live (TTL)
C. Use ElastiCache for Memcached with lazy loading and short time to live (TTL)
D. Use ElastiCache for Redis with write-through and long time to live (TTL)

Question # 10

A database specialist needs to enable IAM authentication on an existing Amazon Aurora PostgreSQL DB cluster. The database specialist already has modified the DB cluster settings, has created IAM and database credentials, and has distributed the credentials to the appropriate users.

What should the database specialist do next to establish the credentials for the users to use to log in to the DB cluster?

A. Add the users' IAM credentials to the Aurora cluster parameter group.
B. Run the generate-db-auth-token command with the user names to generate a temporary password for the users.
C. Add the users' IAM credentials to the default credential profile. Use the AWS Management Console to access the DB cluster.
D. Use an AWS Security Token Service (AWS STS) token by sending the IAM access key and secret key as headers to the DB cluster API endpoint.
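As a sketch of option B, the token generation looks like this from the AWS CLI; the hostname, port, and user name below are placeholders:

```shell
# Generates a short-lived (15-minute) authentication token that the
# user passes as the database password when connecting with IAM
# authentication enabled.
aws rds generate-db-auth-token \
    --hostname mycluster.cluster-abc123.us-east-1.rds.amazonaws.com \
    --port 5432 \
    --username db_user
```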

Question # 11

An ecommerce company uses Amazon DynamoDB as the backend for its payments system. A new regulation requires the company to log all data access requests for financial audits. For this purpose, the company plans to use AWS logging and save logs to Amazon S3.

How can a database specialist activate logging on the database?

A. Use AWS CloudTrail to monitor DynamoDB control-plane operations. Create a DynamoDB stream to monitor data-plane operations. Pass the stream to Amazon Kinesis Data Streams. Use that stream as a source for Amazon Kinesis Data Firehose to store the data in an Amazon S3 bucket.
B. Use AWS CloudTrail to monitor DynamoDB data-plane operations. Create a DynamoDB stream to monitor control-plane operations. Pass the stream to Amazon Kinesis Data Streams. Use that stream as a source for Amazon Kinesis Data Firehose to store the data in an Amazon S3 bucket.
C. Create two trails in AWS CloudTrail. Use Trail1 to monitor DynamoDB control-plane operations. Use Trail2 to monitor DynamoDB data-plane operations.
D. Use AWS CloudTrail to monitor DynamoDB data-plane and control-plane operations.

Question # 12

A startup company in the travel industry wants to create an application that includes a personal travel assistant to display information for nearby airports based on user location. The application will use Amazon DynamoDB and must be able to access and display attributes such as airline names, arrival times, and flight numbers. However, the application must not be able to access or display pilot names or passenger counts.

Which solution will meet these requirements MOST cost-effectively?

A. Use a proxy tier between the application and DynamoDB to regulate access to specific tables, items, and attributes.
B. Use IAM policies with a combination of IAM conditions and actions to implement fine-grained access control.
C. Use DynamoDB resource policies to regulate access to specific tables, items, and attributes.
D. Configure an AWS Lambda function to extract only allowed attributes from tables based on user profiles.

Question # 13

A database specialist is working on an Amazon RDS for PostgreSQL DB instance that is experiencing application performance issues due to the addition of new workloads. The database has 5 TB of storage space with Provisioned IOPS. Amazon CloudWatch metrics show that the average disk queue depth is greater than 200 and that the disk I/O response time is significantly higher than usual.

What should the database specialist do to improve the performance of the application immediately?

A. Increase the Provisioned IOPS rate on the storage.
B. Increase the available storage space.
C. Use General Purpose SSD (gp2) storage with burst credits.
D. Create a read replica to offload Read IOPS from the DB instance.

Question # 14

A bike rental company operates an application to track its bikes. The application receives location and condition data from bike sensors. The application also receives rental transaction data from the associated mobile app.

The application uses Amazon DynamoDB as its database layer. The company has configured DynamoDB with provisioned capacity set to 20% above the expected peak load of the application. On an average day, DynamoDB used 22 billion read capacity units (RCUs) and 60 billion write capacity units (WCUs). The application is running well. Usage changes smoothly over the course of the day and is generally shaped like a bell curve. The timing and magnitude of peaks vary based on the weather and season, but the general shape is consistent.

Which solution will provide the MOST cost optimization of the DynamoDB database layer?

A. Change the DynamoDB tables to use on-demand capacity.
B. Use AWS Auto Scaling and configure time-based scaling.
C. Enable DynamoDB capacity-based auto scaling.
D. Enable DynamoDB Accelerator (DAX).

Question # 15

A news portal is looking for a data store to store 120 GB of metadata about its posts and comments. The posts and comments are not frequently looked up or updated. However, occasional lookups are expected to be served with single-digit millisecond latency on average.

What is the MOST cost-effective solution?

A. Use Amazon DynamoDB with on-demand capacity mode. Purchase reserved capacity.
B. Use Amazon ElastiCache for Redis for data storage. Turn off cluster mode.
C. Use Amazon S3 Standard-Infrequent Access (S3 Standard-IA) for data storage and use Amazon Athena to query the data.
D. Use Amazon DynamoDB with on-demand capacity mode. Switch the table class to DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA).
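As a sketch of the table-class switch in option D, this is a single AWS CLI call on an existing table (the table name is a placeholder):

```shell
# Standard-IA trades higher per-request cost for much cheaper storage,
# which suits large, rarely read datasets that still need DynamoDB's
# single-digit-millisecond lookups.
aws dynamodb update-table \
    --table-name PostMetadata \
    --table-class STANDARD_INFREQUENT_ACCESS
```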

Question # 16

A global company is creating an application. The application must be highly available. The company requires an RTO and an RPO of less than 5 minutes. The company needs a database that will provide the ability to set up an active-active configuration and near-real-time synchronization of data across tables in multiple AWS Regions.

Which solution will meet these requirements?

A. Amazon RDS for MariaDB with cross-Region read replicas
B. Amazon RDS with a Multi-AZ deployment
C. Amazon DynamoDB global tables
D. Amazon DynamoDB with a global secondary index (GSI)

Question # 17

A company uses a large, growing, high-performance on-premises Microsoft SQL Server instance with an Always On availability group cluster size of 120 TB. The company uses a third-party backup product that requires system-level access to the databases. The company will continue to use this third-party backup product in the future.

The company wants to move the DB cluster to AWS with the least possible downtime and data loss. The company needs a 2 Gbps connection to sustain Always On asynchronous data replication between the company's data center and AWS.

Which combination of actions should a database specialist take to meet these requirements? (Select THREE.)

A. Establish an AWS Direct Connect hosted connection between the company's data center and AWS.
B. Create an AWS Site-to-Site VPN connection between the company's data center and AWS over the internet.
C. Use AWS Database Migration Service (AWS DMS) to migrate the on-premises SQL Server databases to Amazon RDS for SQL Server. Configure Always On availability groups for SQL Server.
D. Deploy a new SQL Server Always On availability group DB cluster on Amazon EC2. Configure Always On distributed availability groups between the on-premises DB cluster and the AWS DB cluster. Fail over to the AWS DB cluster when it is time to migrate.
E. Grant system-level access to the third-party backup product to perform backups of the Amazon RDS for SQL Server DB instance.
F. Configure the third-party backup product to perform backups of the DB cluster on Amazon EC2.

Question # 18

A company's application team needs to select an AWS managed database service to store application and user data. The application team is familiar with MySQL but is open to new solutions. The application and user data are stored in 10 tables and are denormalized. The application will access this data through an API layer using a unique ID in each table. The company expects the traffic to be light at first, but the traffic will increase to thousands of transactions each second within the first year. The database service must support active reads and writes in multiple AWS Regions at the same time. Query response times need to be less than 100 ms.

Which AWS database solution will meet these requirements?

A. Deploy an Amazon RDS for MySQL environment in each Region and leverage AWS Database Migration Service (AWS DMS) to set up multi-Region bidirectional replication.
B. Deploy an Amazon Aurora MySQL global database with write forwarding turned on.
C. Deploy an Amazon DynamoDB database with global tables.
D. Deploy an Amazon DocumentDB global cluster across multiple Regions.

Question # 19

A database specialist wants to ensure that an Amazon Aurora DB cluster is always automatically upgraded to the most recent minor version available. Noticing that there is a new minor version available, the database specialist has issued an AWS CLI command to enable automatic minor version updates. The command runs successfully, but checking the Aurora DB cluster indicates that no update to the Aurora version has been made.

What might account for this? (Choose two.)

A. The new minor version has not yet been designated as preferred and requires a manual upgrade.
B. Configuring automatic upgrades using the AWS CLI is not supported. This must be enabled expressly using the AWS Management Console.
C. Applying minor version upgrades requires sufficient free space.
D. The AWS CLI command did not include an apply-immediately parameter.
E. Aurora has detected a breaking change in the new minor version and has automatically rejected the upgrade.

Question # 20

A financial services company is using AWS Database Migration Service (AWS DMS) to migrate its databases from on premises to AWS. A database administrator is working on replicating a database to AWS from on premises using full load and change data capture (CDC). During the CDC replication, the database administrator observed that the target latency was high and slowly increasing.

What could be the root causes for this high target latency? (Select TWO.)

A. There was ongoing maintenance on the replication instance.
B. The source endpoint was changed by modifying the task.
C. Loopback changes had affected the source and target instances.
D. There was no primary key or index in the target database.
E. There were resource bottlenecks in the replication instance.

Question # 21

A company has an Amazon Redshift cluster with database audit logging enabled. A security audit shows that raw SQL statements that run against the Redshift cluster are being logged to an Amazon S3 bucket. The security team requires that authentication logs are generated for use in an intrusion detection system (IDS), but the security team does not require SQL queries.

What should a database specialist do to remediate this issue?

A. Set the parameter to true in the database parameter group.
B. Turn off the query monitoring rule in the Redshift cluster's workload management (WLM).
C. Set the enable_user_activity_logging parameter to false in the database parameter group.
D. Disable audit logging on the Redshift cluster.
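As a sketch of option C, the parameter change can be made with the AWS CLI on a custom Redshift parameter group (the group name is a placeholder):

```shell
# With user activity logging disabled, raw SQL statements stop
# flowing to S3 while connection and authentication audit logs
# continue, which is what the IDS needs.
aws redshift modify-cluster-parameter-group \
    --parameter-group-name redshift-audit-params \
    --parameters "ParameterName=enable_user_activity_logging,ParameterValue=false"
```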

Question # 22

An online retailer uses Amazon DynamoDB for its product catalog and order data. Some popular items have led to frequently accessed keys in the data, and the company is using DynamoDB Accelerator (DAX) as the caching solution to cater to the frequently accessed keys. As the number of popular products is growing, the company realizes that more items need to be cached. The company observes a high cache miss rate and needs a solution to address this issue.

What should a database specialist do to accommodate the changing requirements for DAX?

A. Increase the number of nodes in the existing DAX cluster.
B. Create a new DAX cluster with more nodes. Change the DAX endpoint in the application to point to the new cluster.
C. Create a new DAX cluster using a larger node type. Change the DAX endpoint in the application to point to the new cluster.
D. Modify the node type in the existing DAX cluster.

Question # 23

A business's production database is hosted on a single-node Amazon RDS for MySQL DB instance. The database instance is hosted in a United States AWS Region.

A week before a significant sales event, a fresh database maintenance update is released. The maintenance update has been designated as necessary. The firm wants to minimize the database instance's downtime and requests that a database expert make the database instance highly available until the sales event concludes.

Which solution will satisfy these criteria?

A. Defer the maintenance update until the sales event is over.
B. Create a read replica with the latest update. Initiate a failover before the sales event.
C. Create a read replica with the latest update. Transfer all read-only traffic to the readreplica during the sales event.
D. Convert the DB instance into a Multi-AZ deployment. Apply the maintenance update.
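As a sketch of option D's first step, converting a single-node instance to Multi-AZ is one AWS CLI call (the instance identifier is a placeholder):

```shell
# With Multi-AZ in place, RDS applies maintenance to the standby
# first and then fails over, so the visible downtime during the
# update is limited to the failover itself.
aws rds modify-db-instance \
    --db-instance-identifier prod-mysql \
    --multi-az \
    --apply-immediately
```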

Question # 24

A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL Multi-AZ DB instance. The company recently conducted tests on the database after business hours, and the tests generated additional database logs. As a result, free storage of the DB instance is low and is expected to be exhausted in 2 days.

The company wants to recover the free storage that the additional logs consumed. The solution must not result in downtime for the database.

Which solution will meet these requirements?

A. Modify the rds.log_retention_period parameter to 0. Reboot the DB instance to save the changes.
B. Modify the rds.log_retention_period parameter to 1440. Wait up to 24 hours for database logs to be deleted.
C. Modify the temp_file_limit parameter to a smaller value to reclaim space on the DB instance.
D. Modify the rds.log_retention_period parameter to 1440. Reboot the DB instance to save the changes.
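As a sketch of option B, the retention parameter can be changed in a custom parameter group with the AWS CLI (the group name is a placeholder):

```shell
# 1440 minutes = 24 hours. rds.log_retention_period is a dynamic
# parameter, so no reboot is needed; logs older than the retention
# window are deleted automatically, freeing storage without downtime.
aws rds modify-db-parameter-group \
    --db-parameter-group-name pg-oltp-params \
    --parameters "ParameterName=rds.log_retention_period,ParameterValue=1440,ApplyMethod=immediate"
```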

Question # 25

A company has an existing system that uses a single-instance Amazon DocumentDB (with MongoDB compatibility) cluster. Read requests account for 75% of the system queries. Write requests are expected to increase by 50% after an upcoming global release. A database specialist needs to design a solution that improves the overall database performance without creating additional application overhead.

Which solution will meet these requirements?

A. Recreate the cluster with a shared cluster volume. Add two instances to serve both read requests and write requests.
B. Add one read replica instance. Activate a shared cluster volume. Route all read queries to the read replica instance.
C. Add one read replica instance. Set the read preference to secondary preferred.
D. Add one read replica instance. Update the application to route all read queries to the read replica instance.

Question # 26

A financial company is hosting its web application on AWS. The application's database is hosted on Amazon RDS for MySQL with automated backups enabled.

The application has caused a logical corruption of the database, which is causing the application to become unresponsive. The specific time of the corruption has been identified, and it was within the backup retention period.

How should a database specialist recover the database to the most recent point before corruption?

A. Use the point-in-time restore capability to restore the DB instance to the specified time. No changes to the application connection string are required.
B. Use the point-in-time restore capability to restore the DB instance to the specified time. Change the application connection string to the new, restored DB instance.
C. Restore using the latest automated backup. Change the application connection string to the new, restored DB instance.
D. Restore using the appropriate automated backup. No changes to the application connection string are required.
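As a sketch of the point-in-time restore in options A and B, the AWS CLI call looks like this; the instance identifiers and timestamp are placeholders:

```shell
# A point-in-time restore always creates a NEW DB instance with a
# new endpoint, which is why the application connection string must
# be updated afterward.
aws rds restore-db-instance-to-point-in-time \
    --source-db-instance-identifier prod-mysql \
    --target-db-instance-identifier prod-mysql-restored \
    --restore-time 2024-03-10T02:15:00Z
```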

Question # 27

A company is running critical applications on AWS. Most of the application deployments use Amazon Aurora MySQL for the database stack. The company uses AWS CloudFormation to deploy the DB instances.

The company's application team recently implemented a CI/CD pipeline. A database engineer needs to integrate the database deployment CloudFormation stack with the newly built CI/CD platform. Updates to the CloudFormation stack must not update existing production database resources.

Which CloudFormation stack policy action should the database engineer implement to meet these requirements?

A. Use a Deny statement for the Update:Modify action on the production database resources.
B. Use a Deny statement for the action on the production database resources.
C. Use a Deny statement for the Update:Delete action on the production database resources.
D. Use a Deny statement for the Update:Replace action on the production database resources.
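As a sketch of the stack-policy structure these options describe, a Deny statement scoped to one logical resource looks like the fragment below; the logical resource ID is a placeholder:

```json
{
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "Update:Replace",
      "Principal": "*",
      "Resource": "LogicalResourceId/ProductionAuroraCluster"
    },
    {
      "Effect": "Allow",
      "Action": "Update:*",
      "Principal": "*",
      "Resource": "*"
    }
  ]
}
```

Stack policies support the Update:Modify, Update:Replace, and Update:Delete actions (plus the Update:* wildcard); swapping the action string in the Deny statement yields the behavior each option describes.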

Question # 28

A gaming company is building a mobile game that will have as many as 25,000 active concurrent users in the first 2 weeks after launch. The game has a leaderboard that shows the 10 highest-scoring players over the last 24 hours. The leaderboard calculations are processed by an AWS Lambda function, which takes about 10 seconds. The company wants the data on the leaderboard to be no more than 1 minute old.

Which architecture will meet these requirements in the MOST operationally efficient way?

A. Deliver the player data to an Amazon Timestream database. Create an Amazon ElastiCache for Redis cluster. Configure the Lambda function to store the results in Redis. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the Redis cluster for the leaderboard data.
B. Deliver the player data to an Amazon Timestream database. Create an Amazon DynamoDB table. Configure the Lambda function to store the results in DynamoDB. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the DynamoDB table for the leaderboard data.
C. Deliver the player data to an Amazon Aurora MySQL database. Create an Amazon DynamoDB table. Configure the Lambda function to store the results in MySQL. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the DynamoDB table for the leaderboard data.
D. Deliver the player data to an Amazon Neptune database. Create an Amazon ElastiCache for Redis cluster. Configure the Lambda function to store the results in Redis. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the Redis cluster for the leaderboard data.

Question # 29

In one AWS account, a business runs a two-tier ecommerce application. An Amazon RDS for MySQL Multi-AZ DB instance serves as the application's backend. A developer removed the database instance in the production environment by accident. Although the organization recovered the database, the incident resulted in hours of outage and financial loss.

Which combination of adjustments would reduce the likelihood that this error will occur again in the future? (Select three.)

A. Grant least privilege to groups, IAM users, and roles.
B. Allow all users to restore a database from a backup.
C. Enable deletion protection on existing production DB instances.
D. Use an ACL policy to restrict users from DB instance deletion.
E. Enable AWS CloudTrail logging and Enhanced Monitoring.
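As a sketch of the deletion-protection safeguard in option C, the flag can be enabled on an existing instance with one AWS CLI call (the instance identifier is a placeholder):

```shell
# With deletion protection on, any delete-db-instance call fails
# until the flag is explicitly turned off, guarding against
# accidental deletion of production databases.
aws rds modify-db-instance \
    --db-instance-identifier prod-mysql \
    --deletion-protection \
    --apply-immediately
```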

Question # 30

Application developers have reported that an application is running slower as more users are added. The application database is running on an Amazon Aurora DB cluster with an Aurora Replica. The application is written to take advantage of read scaling through reader endpoints. A database specialist looks at the performance metrics of the database and determines that, as new users were added to the database, the primary instance CPU utilization steadily increased while the Aurora Replica CPU utilization remained steady. How can the database specialist improve database performance while ensuring minimal downtime?

A. Modify the Aurora DB cluster to add more replicas until the overall load stabilizes. Then, reduce the number of replicas once the application meets service level objectives.
B. Modify the primary instance to a larger instance size that offers more CPU capacity.
C. Modify a replica to a larger instance size that has more CPU capacity. Then, promote the modified replica.
D. Restore the Aurora DB cluster to one that has an instance size with more CPU capacity. Then, swap the names of the old and new DB clusters.

Question # 31

A company is using an Amazon Aurora PostgreSQL DB cluster for the backend of its mobile application. The application is running continuously and a database specialist is satisfied with high availability and fast failover, but is concerned about performance degradation after failover. How can the database specialist minimize the performance degradation after failover?

A. Enable cluster cache management for the Aurora DB cluster and set the promotion priority for the writer DB instance and replica to tier-0
B. Enable cluster cache management for the Aurora DB cluster and set the promotion priority for the writer DB instance and replica to tier-1
C. Enable Query Plan Management for the Aurora DB cluster and perform a manual plan capture
D. Enable Query Plan Management for the Aurora DB cluster and force the query optimizer to use the desired plan
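For context on option A: in Aurora PostgreSQL, cluster cache management is enabled through the `apg_ccm_enabled` cluster parameter, and failover targets are assigned promotion tiers per instance. A hedged sketch of the two parameter payloads involved; all identifiers are hypothetical and the calls are commented out:

```python
# Sketch: enable cluster cache management (Aurora PostgreSQL) and place a
# replica in promotion tier 0 alongside the writer. Names are hypothetical.
ccm_parameter = {
    "DBClusterParameterGroupName": "my-aurora-pg-params",  # hypothetical
    "Parameters": [{
        "ParameterName": "apg_ccm_enabled",
        "ParameterValue": "1",
        "ApplyMethod": "immediate",
    }],
}
tier0 = {"DBInstanceIdentifier": "aurora-replica-1", "PromotionTier": 0}
# import boto3
# rds = boto3.client("rds")
# rds.modify_db_cluster_parameter_group(**ccm_parameter)
# rds.modify_db_instance(**tier0)
```

The tier-0 replica then keeps its buffer cache warmed from the writer, which is what limits performance degradation after a failover.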

Question # 32

A large financial services company uses Amazon ElastiCache for Redis for its new application that has a global user base. A database administrator must develop a caching solution that will be available across AWS Regions and include low-latency replication and failover capabilities for disaster recovery (DR). The company's security team requires the encryption of cross-Region data transfers. Which solution meets these requirements with the LEAST amount of operational effort?

A. Enable cluster mode in ElastiCache for Redis. Then create multiple clusters across Regions and replicate the cache data by using AWS Database Migration Service (AWS DMS). Promote a cluster in the failover Region to handle production traffic when DR is required.
B. Create a global datastore in ElastiCache for Redis. Then create replica clusters in two other Regions. Promote one of the replica clusters as primary when DR is required.
C. Disable cluster mode in ElastiCache for Redis. Then create multiple replication groups across Regions and replicate the cache data by using AWS Database Migration Service (AWS DMS). Promote a replication group in the failover Region to primary when DR is required.
D. Create a snapshot of ElastiCache for Redis in the primary Region and copy it to the failover Region. Use the snapshot to restore the cluster from the failover Region when DR is required.
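A global datastore (option B) is created from an existing primary replication group, after which a secondary cluster joins it from another Region. A hedged sketch of the two request payloads; replication group names are hypothetical, and the actual ID assigned by AWS carries a generated prefix in front of the suffix:

```python
# Sketch: create an ElastiCache for Redis Global Datastore and attach a
# secondary cluster in another Region. All names are hypothetical.
create_global = {
    "GlobalReplicationGroupIdSuffix": "my-global-cache",   # hypothetical
    "PrimaryReplicationGroupId": "prod-redis-primary",     # hypothetical
}
join_secondary = {
    "ReplicationGroupId": "prod-redis-dr",                 # hypothetical
    # AWS prefixes the suffix with a generated ID; shown here illustratively.
    "GlobalReplicationGroupId": "xxxx-my-global-cache",
    "ReplicationGroupDescription": "DR secondary",
}
# import boto3
# boto3.client("elasticache", region_name="us-east-1").create_global_replication_group(**create_global)
# boto3.client("elasticache", region_name="ap-southeast-1").create_replication_group(**join_secondary)
```

Cross-Region replication traffic within a global datastore is encrypted in transit, which is the property the security team requires.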

Question # 33

A company is planning to migrate a 40 TB Oracle database to an Amazon Aurora PostgreSQL DB cluster by using a single AWS Database Migration Service (AWS DMS) task within a single replication instance. During early testing, AWS DMS is not scaling to the company's needs. Full load and change data capture (CDC) are taking days to complete. The source database server and the target DB cluster have enough network bandwidth and CPU bandwidth for the additional workload. The replication instance has enough resources to support the replication. A database specialist needs to improve database performance, reduce data migration time, and create multiple DMS tasks. Which combination of changes will meet these requirements? (Choose two.)

A. Increase the value of the ParallelLoadThreads parameter in the DMS task settings for the tables.
B. Use a smaller set of tables with each DMS task. Set the MaxFullLoadSubTasks parameter to a higher value.
C. Use a smaller set of tables with each DMS task. Set the MaxFullLoadSubTasks parameter to a lower value.
D. Use parallel load with different data boundaries for larger tables.
E. Run the DMS tasks on a larger instance class. Increase local storage on the instance.
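The settings named in options B and D live in two different DMS JSON documents: MaxFullLoadSubTasks belongs to the task settings, while parallel load ranges belong to the table mappings. A hedged sketch of both fragments; the schema, table, column, and boundary values are hypothetical:

```python
import json

# Sketch: DMS task-settings fragment raising MaxFullLoadSubTasks (default 8),
# plus a table-mapping rule that splits one large table into load ranges.
task_settings = {"FullLoadSettings": {"MaxFullLoadSubTasks": 16}}
table_mappings = {
    "rules": [{
        "rule-type": "table-settings",
        "rule-id": "1",
        "rule-name": "parallel-load-orders",
        "object-locator": {"schema-name": "SALES", "table-name": "ORDERS"},
        "parallel-load": {
            "type": "ranges",
            "columns": ["ORDER_ID"],
            "boundaries": [["1000000"], ["2000000"], ["3000000"]],
        },
    }],
}
print(json.dumps(task_settings))
```

Each boundary defines a segment that DMS can unload and load concurrently, which is how full-load time drops for very large tables.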

Question # 34

An ecommerce company is running Amazon RDS for Microsoft SQL Server. The company is planning to perform testing in a development environment with production data. The development environment and the production environment are in separate AWS accounts. Both environments use AWS Key Management Service (AWS KMS) encrypted databases with both manual and automated snapshots. A database specialist needs to share a KMS encrypted production RDS snapshot with the development account. Which combination of steps should the database specialist take to meet these requirements? (Select THREE.)

A. Create an automated snapshot. Share the snapshot from the production account to the development account.
B. Create a manual snapshot. Share the snapshot from the production account to the development account.
C. Share the snapshot that is encrypted by using the development account default KMS encryption key.
D. Share the snapshot that is encrypted by using the production account custom KMS encryption key.
E. Allow the development account to access the production account KMS encryption key.
F. Allow the production account to access the development account KMS encryption key.
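Sharing an encrypted manual snapshot involves two separate grants: the snapshot's restore attribute and the KMS key policy. A hedged sketch of both pieces; the snapshot name, account ID, and ARN are hypothetical:

```python
# Sketch: share a manual RDS snapshot with a second account and grant that
# account use of the custom KMS key. All identifiers are hypothetical.
share_snapshot = {
    "DBSnapshotIdentifier": "prod-sqlserver-manual-snap",  # hypothetical
    "AttributeName": "restore",
    "ValuesToAdd": ["222222222222"],  # development account ID (hypothetical)
}
# Statement to add to the production account's custom KMS key policy:
key_policy_statement = {
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::222222222222:root"},
    "Action": ["kms:Decrypt", "kms:DescribeKey", "kms:CreateGrant"],
    "Resource": "*",
}
# import boto3
# boto3.client("rds").modify_db_snapshot_attribute(**share_snapshot)
```

Automated snapshots and snapshots encrypted with the account's default KMS key cannot be shared directly, which is why the manual snapshot and a custom key are needed.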

Question # 35

A database specialist needs to replace the encryption key for an Amazon RDS DB instance. The database specialist needs to take immediate action to ensure security of the database. Which solution will meet these requirements?

A. Modify the DB instance to update the encryption key. Perform this update immediately without waiting for the next scheduled maintenance window.
B. Export the database to an Amazon S3 bucket. Import the data to an existing DB instance by using the export file. Specify a new encryption key during the import process.
C. Create a manual snapshot of the DB instance. Create an encrypted copy of the snapshot by using a new encryption key. Create a new DB instance from the encrypted snapshot.
D. Create a manual snapshot of the DB instance. Restore the snapshot to a new DB instance. Specify a new encryption key during the restoration process.
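The copy-then-restore flow in option C is the only path RDS offers for changing an instance's key, since the key cannot be modified in place. A hedged sketch of the two request payloads; all identifiers and the key ARN are hypothetical:

```python
# Sketch: re-encrypt an RDS snapshot under a new KMS key, then restore it.
# All identifiers are hypothetical; calls are commented out.
copy_params = {
    "SourceDBSnapshotIdentifier": "prod-db-manual-snap",
    "TargetDBSnapshotIdentifier": "prod-db-manual-snap-rekeyed",
    "KmsKeyId": "arn:aws:kms:us-east-1:111111111111:key/hypothetical-key-id",
}
restore_params = {
    "DBInstanceIdentifier": "prod-db-rekeyed",
    "DBSnapshotIdentifier": "prod-db-manual-snap-rekeyed",
}
# import boto3
# rds = boto3.client("rds")
# rds.copy_db_snapshot(**copy_params)
# rds.restore_db_instance_from_db_snapshot(**restore_params)
```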

Question # 36

A healthcare company is running an application on Amazon EC2 in a public subnet and using Amazon DocumentDB (with MongoDB compatibility) as the storage layer. An audit reveals that the traffic between the application and Amazon DocumentDB is not encrypted and that the DocumentDB cluster is not encrypted at rest. A database specialist must correct these issues and ensure that the data in transit and the data at rest are encrypted. Which actions should the database specialist take to meet these requirements? (Select TWO.)

A. Download the SSH RSA public key for Amazon DocumentDB. Update the application configuration to use the instance endpoint instead of the cluster endpoint and run queries over SSH.
B. Download the SSL .pem public key for Amazon DocumentDB. Add the key to the application package and make sure the application is using the key while connecting to the cluster.
C. Create a snapshot of the unencrypted cluster. Restore the unencrypted snapshot as a new cluster with the --storage-encrypted parameter set to true. Update the application to point to the new cluster.
D. Create an Amazon DocumentDB VPC endpoint to prevent the traffic from going to the Amazon DocumentDB public endpoint. Set a VPC endpoint policy to allow only the application instance's security group to connect.
E. Activate encryption at rest using the modify-db-cluster command with the --storage-encrypted parameter set to true. Set the security group of the cluster to allow only the application instance's security group to connect.
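The two mechanisms in play here, restoring into an encrypted cluster and connecting over TLS with the DocumentDB CA bundle, can be sketched as follows. The cluster names, host, credentials, and CA file name are all hypothetical or illustrative; in this sketch the restored cluster's at-rest encryption is driven by supplying a KMS key:

```python
# Sketch: restore a DocumentDB snapshot as an encrypted cluster, and build a
# TLS connection string referencing the DocumentDB CA bundle.
restore_params = {
    "DBClusterIdentifier": "docdb-encrypted",        # hypothetical
    "SnapshotIdentifier": "docdb-unencrypted-snap",  # hypothetical
    "Engine": "docdb",
    # Supplying a KMS key encrypts the restored cluster at rest.
    "KmsKeyId": "arn:aws:kms:us-east-1:111111111111:key/hypothetical-key-id",
}
conn_uri = (
    "mongodb://appuser:REPLACE_ME@docdb-encrypted.cluster-xxxx"
    ".us-east-1.docdb.amazonaws.com:27017"
    "/?tls=true&tlsCAFile=global-bundle.pem&replicaSet=rs0"
)
# import boto3
# boto3.client("docdb").restore_db_cluster_from_snapshot(**restore_params)
```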

Question # 37

A database specialist needs to move an Amazon RDS DB instance from one AWS account to another AWS account. Which solution will meet this requirement with the LEAST operational effort?

A. Use AWS Database Migration Service (AWS DMS) to migrate the DB instance from the source AWS account to the destination AWS account.
B. Create a DB snapshot of the DB instance. Share the snapshot with the destination AWS account. Create a new DB instance by restoring the snapshot in the destination AWS account.
C. Create a Multi-AZ deployment for the DB instance. Create a read replica for the DB instance in the source AWS account. Use the read replica to replicate the data into the DB instance in the destination AWS account.
D. Use AWS DataSync to back up the DB instance in the source AWS account. Use AWS Resource Access Manager (AWS RAM) to restore the backup in the destination AWS account.

Question # 38

A development team at an international gaming company is experimenting with Amazon DynamoDB to store in-game events for three mobile games. The most popular game hosts a maximum of 500,000 concurrent users, and the least popular game hosts a maximum of 10,000 concurrent users. The average size of an event is 20 KB, and the average user session produces one event each second. Each event is tagged with a time in milliseconds and a globally unique identifier. The lead developer created a single DynamoDB table for the events with the following schema:
Partition key: game name
Sort key: event identifier
Local secondary index: player identifier
Event time
The tests were successful in a small-scale development environment. However, when deployed to production, new events stopped being added to the table and the logs show DynamoDB failures with the ItemCollectionSizeLimitExceededException error code. Which design change should a database specialist recommend to the development team?

A. Use the player identifier as the partition key. Use the event time as the sort key. Add a global secondary index with the game name as the partition key and the event time as the sort key.
B. Create two tables. Use the game name as the partition key in both tables. Use the event time as the sort key for the first table. Use the player identifier as the sort key for the second table.
C. Replace the sort key with a compound value consisting of the player identifier collated with the event time, separated by a dash. Add a local secondary index with the player identifier as the sort key.
D. Create one table for each game. Use the player identifier as the partition key. Use the event time as the sort key.
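The redesign in option A can be sketched as a table definition. The ItemCollectionSizeLimitExceededException arises because a table with a local secondary index limits each partition key's item collection to 10 GB; a high-cardinality partition key plus a global secondary index avoids that cap. Table, attribute, and index names below are hypothetical:

```python
# Sketch: DynamoDB table keyed on player, with a GSI for per-game,
# time-ordered queries. Names are hypothetical; call is commented out.
create_table_params = {
    "TableName": "GameEvents",
    "AttributeDefinitions": [
        {"AttributeName": "player_id", "AttributeType": "S"},
        {"AttributeName": "event_time", "AttributeType": "N"},
        {"AttributeName": "game_name", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "player_id", "KeyType": "HASH"},
        {"AttributeName": "event_time", "KeyType": "RANGE"},
    ],
    "GlobalSecondaryIndexes": [{
        "IndexName": "game-time-index",
        "KeySchema": [
            {"AttributeName": "game_name", "KeyType": "HASH"},
            {"AttributeName": "event_time", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    "BillingMode": "PAY_PER_REQUEST",
}
# import boto3
# boto3.client("dynamodb").create_table(**create_table_params)
```

Unlike an LSI, a GSI imposes no item-collection size limit, so write volume per game no longer hits a ceiling.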

Question # 39

An online advertising website uses an Amazon DynamoDB table with on-demand capacity mode as its data store. The website also has a DynamoDB Accelerator (DAX) cluster in the same VPC as its web application server. The application needs to perform infrequent writes and many strongly consistent reads from the data store by querying the DAX cluster. During a performance audit, a systems administrator notices that the application can look up items by using the DAX cluster. However, the QueryCacheHits metric for the DAX cluster consistently shows 0 while the QueryCacheMisses metric continuously keeps growing in Amazon CloudWatch. What is the MOST likely reason for this occurrence?

A. A VPC endpoint was not added to access DynamoDB.
B. Strongly consistent reads are always passed through DAX to DynamoDB.
C. DynamoDB is scaling due to a burst in traffic, resulting in degraded performance.
D. A VPC endpoint was not added to access CloudWatch.

Question # 40

A company is using AWS CloudFormation to provision and manage infrastructure resources, including a production database. During a recent CloudFormation stack update, a database specialist observed that changes were made to a database resource that is named ProductionDatabase. The company wants to prevent changes to only ProductionDatabase during future stack updates. Which stack policy will meet this requirement?

A. Option A
B. Option B
C. Option C
D. Option D
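The option bodies are images that are not reproduced here. For reference, a stack policy that denies updates to only the ProductionDatabase logical resource while allowing all other updates would look roughly like this sketch:

```python
import json

# Sketch of a CloudFormation stack policy: deny Update actions on one
# logical resource, allow them everywhere else.
stack_policy = {
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "Update:*",
            "Resource": "LogicalResourceId/ProductionDatabase",
        },
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "Update:*",
            "Resource": "*",
        },
    ]
}
print(json.dumps(stack_policy, indent=2))
```

Stack policies evaluate Deny over Allow, so the broad Allow statement does not override the targeted Deny.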

Question # 41

A software company is conducting a security audit of its three-node Amazon Aurora MySQL DB cluster. Which finding is a security concern that needs to be addressed?

A. The AWS account root user does not have the minimum privileges required for client applications.
B. Encryption in transit is not configured for all Aurora native backup processes.
C. Each Aurora DB cluster node is not in a separate private VPC with restricted access.
D. The IAM credentials used by the application are not rotated regularly.

Question # 42

A company wants to improve its ecommerce website on AWS. A database specialist decides to add Amazon ElastiCache for Redis in the implementation stack to ease the workload off the database and shorten the website response times. The database specialist must also ensure the ecommerce website is highly available within the company's AWS Region. How should the database specialist deploy ElastiCache to meet this requirement?

A. Launch an ElastiCache for Redis cluster using the AWS CLI with the --cluster-enabled switch.
B. Launch an ElastiCache for Redis cluster and select read replicas in different Availability Zones.
C. Launch two ElastiCache for Redis clusters in two different Availability Zones. Configure Redis streams to replicate the cache from the primary cluster to another.
D. Launch an ElastiCache cluster in the primary Availability Zone and restore the cluster's snapshot to a different Availability Zone during disaster recovery.

Question # 43

A database specialist wants to ensure that an Amazon Aurora DB cluster is always automatically upgraded to the most recent minor version available. Noticing that there is a new minor version available, the database specialist has issued an AWS CLI command to enable automatic minor version updates. The command runs successfully, but checking the Aurora DB cluster indicates that no update to the Aurora version has been made. What might account for this? (Choose two.)

A. The new minor version has not yet been designated as preferred and requires a manual upgrade.
B. Configuring automatic upgrades using the AWS CLI is not supported. This must be enabled expressly using the AWS Management Console.
C. Applying minor version upgrades requires sufficient free space.
D. The AWS CLI command did not include an apply-immediately parameter.
E. Aurora has detected a breaking change in the new minor version and has automatically rejected the upgrade.

Question # 44

A company has more than 100 AWS accounts that need Amazon RDS instances. The company wants to build an automated solution to deploy the RDS instances with specific compliance parameters. The data does not need to be replicated. The company needs to create the databases within 1 day. Which solution will meet these requirements in the MOST operationally efficient way?

A. Create RDS resources by using AWS CloudFormation. Share the CloudFormation template with each account.
B. Create an RDS snapshot. Share the snapshot with each account. Deploy the snapshot into each account.
C. Use AWS CloudFormation to create RDS instances in each account. Run AWS Database Migration Service (AWS DMS) replication to each of the created instances.
D. Create a script by using the AWS CLI to copy the RDS instance into the other accounts from a template account.

Question # 45

A company runs an ecommerce application on premises on Microsoft SQL Server. The company is planning to migrate the application to the AWS Cloud. The application code contains complex T-SQL queries and stored procedures. The company wants to minimize database server maintenance and operating costs after the migration is completed. The company also wants to minimize the need to rewrite code as part of the migration effort. Which solution will meet these requirements?

A. Migrate the database to Amazon Aurora PostgreSQL. Turn on Babelfish.
B. Migrate the database to Amazon S3. Use Amazon Redshift Spectrum for queryprocessing.
C. Migrate the database to Amazon RDS for SQL Server. Turn on Kerberos authentication.
D. Migrate the database to an Amazon EMR cluster that includes multiple primary nodes.

Question # 46

A company uses an Amazon Redshift cluster to run its analytical workloads. Corporate policy requires that the company's data be encrypted at rest with customer managed keys. The company's disaster recovery plan requires that backups of the cluster be copied into another AWS Region on a regular basis. How should a database specialist automate the process of backing up the cluster data in compliance with these policies?

A. Copy the AWS Key Management Service (AWS KMS) customer managed key from the source Region to the destination Region. Set up an AWS Glue job in the source Region to copy the latest snapshot of the Amazon Redshift cluster from the source Region to the destination Region. Use a time-based schedule in AWS Glue to run the job on a daily basis.
B. Create a new AWS Key Management Service (AWS KMS) customer managed key in the destination Region. Create a snapshot copy grant in the destination Region specifying the new key. In the source Region, configure cross-Region snapshots for the Amazon Redshift cluster specifying the destination Region, the snapshot copy grant, and retention periods for the snapshot.
C. Copy the AWS Key Management Service (AWS KMS) customer managed key from the source Region to the destination Region. Create Amazon S3 buckets in each Region using the keys from their respective Regions. Use Amazon EventBridge (Amazon CloudWatch Events) to schedule an AWS Lambda function in the source Region to copy the latest snapshot to the S3 bucket in that Region. Configure S3 Cross-Region Replication to copy the snapshots to the destination Region, specifying the source and destination KMS key IDs in the replication configuration.
D. Use the same customer-supplied key materials to create a CMK with the same private key in the destination Region. Configure cross-Region snapshots in the source Region targeting the destination Region. Specify the corresponding CMK in the destination Region to encrypt the snapshot.

Question # 47

A database specialist needs to move a table from a database that is running on an Amazon Aurora PostgreSQL DB cluster into a new and distinct database cluster. The new table in the new database must be updated with any changes to the original table that happen while the migration is in progress. The original table contains a column to store data as large as 2 GB in the form of large binary objects (LOBs). A few records are large in size, but most of the LOB data is smaller than 32 KB. What is the FASTEST way to replicate all the data from the original table?

A. Use AWS Database Migration Service (AWS DMS) with ongoing replication in full LOB mode.
B. Take a snapshot of the database. Create a new DB instance by using the snapshot.
C. Use AWS Database Migration Service (AWS DMS) with ongoing replication in limited LOB mode.
D. Use AWS Database Migration Service (AWS DMS) with ongoing replication in inline LOB mode.

Question # 48

A company recently migrated its line-of-business (LOB) application to AWS. The application uses an Amazon RDS for SQL Server DB instance as its database engine. The company must set up cross-Region disaster recovery for the application. The company needs a solution with the lowest possible RPO and RTO. Which solution will meet these requirements?

A. Create a cross-Region read replica of the DB instance. Promote the read replica at the time of failover. 
B. Set up SQL replication from the DB instance to an Amazon EC2 instance in the disaster recovery Region. Promote the EC2 instance as the primary server. 
C. Use AWS Database Migration Service (AWS DMS) for ongoing replication of the DB instance in the disaster recovery Region.
D. Take manual snapshots of the DB instance in the primary Region. Copy the snapshots to the disaster recovery Region. 

Question # 49

A financial services company runs an on-premises MySQL database for a critical application. The company is dissatisfied with its current database disaster recovery (DR) solution. The application experiences a significant amount of downtime whenever the database fails over to its DR facility. The application also experiences slower response times when reports are processed on the same database. To minimize the downtime in DR situations, the company has decided to migrate the database to AWS. The company requires a solution that is highly available and the most cost-effective. Which solution meets these requirements?

A. Create an Amazon RDS for MySQL Multi-AZ DB instance and configure a read replica in a different Availability Zone. Configure the application to reference the replica instance endpoint and report queries to reference the primary DB instance endpoint. 
B. Create an Amazon RDS for MySQL Multi-AZ DB instance and configure a read replica in a different Availability Zone. Configure the application to reference the primary DB instance endpoint and report queries to reference the replica instance endpoint. 
C. Create an Amazon Aurora DB cluster and configure an Aurora Replica in a different Availability Zone. Configure the application to reference the cluster endpoint and report queries to reference the reader endpoint. 
D. Create an Amazon Aurora DB cluster and configure an Aurora Replica in a different Availability Zone. Configure the application to reference the primary DB instance endpoint and report queries to reference the replica instance endpoint. 

Question # 50

A company has branch offices in the United States and Singapore. The company has a three-tier web application that uses a shared database. The database runs on an Amazon RDS for MySQL DB instance that is hosted in the us-west-2 Region. The application has a distributed front end that is deployed in us-west-2 and in the ap-southeast-1 Region. The company uses this front end as a dashboard that provides statistics to sales managers in each branch office. The dashboard loads more slowly in the Singapore branch office than in the United States branch office. The company needs a solution so that the dashboard loads consistently for users in each location. Which solution will meet these requirements in the MOST operationally efficient way?

A. Take a snapshot of the DB instance in us-west-2. Create a new DB instance in ap-southeast-2 from the snapshot. Reconfigure the ap-southeast-1 front-end dashboard to access the new DB instance.
B. Create an RDS read replica in ap-southeast-1 from the primary DB instance in us-west-2. Reconfigure the ap-southeast-1 front-end dashboard to access the read replica.
C. Create a new DB instance in ap-southeast-1. Use AWS Database Migration Service (AWS DMS) and change data capture (CDC) to update the new DB instance in ap-southeast-1. Reconfigure the ap-southeast-1 front-end dashboard to access the new DB instance.
D. Create an RDS read replica in us-west-2, where the primary DB instance resides. Create a read replica in ap-southeast-1 from the read replica in us-west-2. Reconfigure the ap-southeast-1 front-end dashboard to access the read replica in ap-southeast-1. 

Question # 51

A software-as-a-service (SaaS) company is using an Amazon Aurora Serverless DB cluster for its production MySQL database. The DB cluster has general logs and slow query logs enabled. A database engineer must use the most operationally efficient solution with minimal resource utilization to retain the logs and facilitate interactive search and analysis. Which solution meets these requirements?

A. Use an AWS Lambda function to ship database logs to an Amazon S3 bucket. Use Amazon Athena and Amazon QuickSight to search and analyze the logs. 
B. Download the logs from the DB cluster and store them in Amazon S3 by using manual scripts. Use Amazon Athena and Amazon QuickSight to search and analyze the logs. 
C. Use an AWS Lambda function to ship database logs to an Amazon S3 bucket. Use Amazon Elasticsearch Service (Amazon ES) and Kibana to search and analyze the logs. 
D. Use Amazon CloudWatch Logs Insights to search and analyze the logs when the logs are automatically uploaded by the DB cluster. 

Question # 52

A gaming company uses Amazon Aurora Serverless for one of its internal applications. The company's developers use Amazon RDS Data API to work with the Aurora Serverless DB cluster. After a recent security review, the company is mandating security enhancements. A database specialist must ensure that access to RDS Data API is private and never passes through the public internet. What should the database specialist do to meet this requirement?

A. Modify the Aurora Serverless cluster by selecting a VPC with private subnets. 
B. Modify the Aurora Serverless cluster by unchecking the publicly accessible option. 
C. Create an interface VPC endpoint that uses AWS PrivateLink for RDS Data API. 
D. Create a gateway VPC endpoint for RDS Data API. 
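RDS Data API is one of the services exposed through AWS PrivateLink as an interface endpoint (option C's mechanism). A hedged sketch of the endpoint request; the Region, VPC ID, and subnet ID are hypothetical:

```python
# Sketch: interface VPC endpoint (AWS PrivateLink) for the RDS Data API.
# VPC and subnet IDs are hypothetical; the call is commented out.
endpoint_params = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0123456789abcdef0",           # hypothetical
    "ServiceName": "com.amazonaws.us-east-1.rds-data",
    "SubnetIds": ["subnet-0123456789abcdef0"],  # hypothetical
    "PrivateDnsEnabled": True,
}
# import boto3
# boto3.client("ec2").create_vpc_endpoint(**endpoint_params)
```

With private DNS enabled, the application's existing Data API calls resolve to the endpoint's private IP addresses without code changes.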

Question # 53

A company runs a customer relationship management (CRM) system that is hosted on premises with a MySQL database as the backend. A custom stored procedure is used to send email notifications to another system when data is inserted into a table. The company has noticed that the performance of the CRM system has decreased due to database reporting applications used by various teams. The company requires an AWS solution that would reduce maintenance, improve performance, and accommodate the email notification feature. Which AWS solution meets these requirements?

A. Use MySQL running on an Amazon EC2 instance with Auto Scaling to accommodate the reporting applications. Configure a stored procedure and an AWS Lambda function that uses Amazon SES to send email notifications to the other system. 
B. Use Amazon Aurora MySQL in a multi-master cluster to accommodate the reporting applications. Configure Amazon RDS event subscriptions to publish a message to an Amazon SNS topic and subscribe the other system's email address to the topic. 
C. Use MySQL running on an Amazon EC2 instance with a read replica to accommodate the reporting applications. Configure Amazon SES integration to send email notifications to the other system. 
D. Use Amazon Aurora MySQL with a read replica for the reporting applications. Configure a stored procedure and an AWS Lambda function to publish a message to an Amazon SNS topic. Subscribe the other system's email address to the topic. 

Question # 54

A security team is conducting an audit for a financial company. The security team discovers that the database credentials of an Amazon RDS for MySQL DB instance are hardcoded in the source code. The source code is stored in a shared location for automatic deployment and is exposed to all users who can access the location. A database specialist must use encryption to ensure that the credentials are not visible in the source code. Which solution will meet these requirements?

A. Use an AWS Key Management Service (AWS KMS) key to encrypt the most recent database backup. Restore the backup as a new database to activate encryption. 
B. Store the source code to access the credentials in an AWS Systems Manager Parameter Store secure string parameter that is encrypted by AWS Key Management Service (AWS KMS). Access the code with calls to Systems Manager. 
C. Store the credentials in an AWS Systems Manager Parameter Store secure string parameter that is encrypted by AWS Key Management Service (AWS KMS). Access the credentials with calls to Systems Manager. 
D. Use an AWS Key Management Service (AWS KMS) key to encrypt the DB instance at rest. Activate RDS encryption in transit by using SSL certificates. 
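The Parameter Store approach in option C boils down to one write and one read. A hedged sketch of both payloads; the parameter name and key alias are hypothetical, and the placeholder value stands in for the real secret, which is never committed to source control:

```python
# Sketch: store DB credentials as a KMS-encrypted SecureString and read
# them back at runtime. Names are hypothetical; calls are commented out.
put_params = {
    "Name": "/prod/crm/db-password",       # hypothetical parameter name
    "Value": "REPLACE_ME",                 # the real secret, never committed
    "Type": "SecureString",
    "KeyId": "alias/prod-db-credentials",  # hypothetical KMS key alias
}
get_params = {"Name": "/prod/crm/db-password", "WithDecryption": True}
# import boto3
# ssm = boto3.client("ssm")
# ssm.put_parameter(**put_params)
# secret = ssm.get_parameter(**get_params)["Parameter"]["Value"]
```

The deployed code then holds only the parameter name, so nothing sensitive appears in the shared source location.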

Question # 55

Developers have requested a new Amazon Redshift cluster so they can load new third-party marketing data. The new cluster is ready and the user credentials are given to the developers. The developers indicate that their copy jobs fail with the following error message: “Amazon Invalid operation: S3ServiceException:Access Denied,Status 403,Error AccessDenied.” The developers need to load this data soon, so a database specialist must act quickly to solve this issue. What is the MOST secure solution?

A. Create a new IAM role with the same user name as the Amazon Redshift developer user ID. Provide the IAM role with read-only access to Amazon S3 with the assume role action. 
B. Create a new IAM role with read-only access to the Amazon S3 bucket and include the assume role action. Modify the Amazon Redshift cluster to add the IAM role. 
C. Create a new IAM role with read-only access to the Amazon S3 bucket with the assume role action. Add this role to the developer IAM user ID used for the copy job that ended with an error message. 
D. Create a new IAM user with access keys and a new role with read-only access to the Amazon S3 bucket. Add this role to the Amazon Redshift cluster. Change the copy job to use the access keys created. 
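The role-based flow in option B has two halves: attaching a role to the cluster and referencing it in the COPY command. A hedged sketch; the cluster name, role ARN, bucket, and table are all hypothetical:

```python
# Sketch: attach a read-only S3 role to a Redshift cluster, then reference
# it in COPY. The role's trust policy must allow redshift.amazonaws.com to
# assume it. All identifiers are hypothetical; the call is commented out.
attach_role = {
    "ClusterIdentifier": "marketing-cluster",  # hypothetical
    "AddIamRoles": ["arn:aws:iam::111111111111:role/RedshiftS3ReadOnly"],
}
copy_sql = (
    "COPY marketing_events FROM 's3://hypothetical-bucket/events/' "
    "IAM_ROLE 'arn:aws:iam::111111111111:role/RedshiftS3ReadOnly' "
    "FORMAT AS CSV;"
)
# import boto3
# boto3.client("redshift").modify_cluster_iam_roles(**attach_role)
```

This avoids long-lived access keys entirely, which is what makes the role-based option the most secure of the four.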

Question # 56

A company's database specialist implements an AWS Database Migration Service (AWS DMS) task for change data capture (CDC) to replicate data from an on-premises Oracle database to Amazon S3. When usage of the company's application increases, the database specialist notices multiple hours of latency with the CDC. Which solutions will reduce this latency? (Choose two.)

A. Configure the DMS task to run in full large binary object (LOB) mode. 
B. Configure the DMS task to run in limited large binary object (LOB) mode. 
C. Create a Multi-AZ replication instance. 
D. Load tables in parallel by creating multiple replication instances for sets of tables that participate in common transactions. 
E. Replicate tables in parallel by creating multiple DMS tasks for sets of tables that do not participate in common transactions. 

Question # 57

A company plans to migrate a MySQL-based application from an on-premises environment to AWS. The application performs database joins across several tables and uses indexes for faster query response times. The company needs the database to be highly available with automatic failover. Which solution on AWS will meet these requirements with the LEAST operational overhead?

A. Deploy an Amazon RDS DB instance with a read replica. 
B. Deploy an Amazon RDS Multi-AZ DB instance. 
C. Deploy Amazon DynamoDB global tables. 
D. Deploy multiple Amazon RDS DB instances. Use Amazon Route 53 DNS with failover health checks configured. 

Question # 58

A company's development team needs to have production data restored in a staging AWS account. The production database is running on an Amazon RDS for PostgreSQL Multi-AZ DB instance, which has AWS KMS encryption enabled using the default KMS key. A database specialist planned to share the most recent automated snapshot with the staging account, but discovered that the option to share snapshots is disabled in the AWS Management Console. What should the database specialist do to resolve this? 

A. Disable automated backups in the DB instance. Share both the automated snapshot and the default KMS key with the staging account. Restore the snapshot in the staging account and enable automated backups. 
B. Copy the automated snapshot specifying a custom KMS encryption key. Share both the copied snapshot and the custom KMS encryption key with the staging account. Restore the snapshot to the staging account within the same Region. 
C. Modify the DB instance to use a custom KMS encryption key. Share both the automated snapshot and the custom KMS encryption key with the staging account. Restore the snapshot in the staging account. 
D. Copy the automated snapshot while keeping the default KMS key. Share both the snapshot and the default KMS key with the staging account. Restore the snapshot in the staging account. 

Question # 59

An online retail company is planning a multi-day flash sale that must support processing of up to 5,000 orders per second. The number of orders and exact schedule for the sale will vary each day. During the sale, approximately 10,000 concurrent users will look at the deals before buying items. Outside of the sale, the traffic volume is very low. The acceptable performance for read/write queries should be under 25 ms. Order items are about 2 KB in size and have a unique identifier. The company requires the most cost-effective solution that will automatically scale and is highly available. Which solution meets these requirements?

A. Amazon DynamoDB with on-demand capacity mode 
B. Amazon Aurora with one writer node and an Aurora Replica with the parallel query feature enabled 
C. Amazon DynamoDB with provisioned capacity mode with 5,000 write capacity units (WCUs) and 10,000 read capacity units (RCUs) 
D. Amazon Aurora with one writer node and two cross-Region Aurora Replicas 
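It helps to sanity-check option C with the standard published DynamoDB capacity unit definitions (1 WCU = one write per second of an item up to 1 KB; 1 RCU = one strongly consistent read per second of up to 4 KB). A quick sizing sketch, using the numbers from the question:

```python
import math

# Back-of-the-envelope DynamoDB capacity sizing, using the standard
# published unit definitions. Traffic figures come from the question stem.

def required_wcu(item_bytes: int, writes_per_sec: int) -> int:
    # Each write consumes ceil(item size / 1 KB) WCUs.
    return math.ceil(item_bytes / 1024) * writes_per_sec

def required_rcu(item_bytes: int, reads_per_sec: int, strongly_consistent: bool = True) -> float:
    # Each strongly consistent read consumes ceil(item size / 4 KB) RCUs;
    # eventually consistent reads cost half.
    units = math.ceil(item_bytes / 4096) * reads_per_sec
    return units if strongly_consistent else units / 2

# A 2 KB order written 5,000 times per second needs 10,000 WCUs --
# double what option C provisions.
print(required_wcu(2 * 1024, 5000))  # 10000
```

The undersized write capacity in option C, combined with the unpredictable daily schedule and very low off-sale traffic, is why on-demand capacity mode (option A) is the better fit here.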

Question # 60

A company wants to build a new invoicing service for its cloud-native application on AWS. The company has a small development team and wants to focus on service feature development and minimize operations and maintenance as much as possible. The company expects the service to handle billions of requests and millions of new records every day. The service feature requirements, including data access patterns, are well-defined. The service has an availability target of 99.99% with a millisecond latency requirement. The database for the service will be the system of record for invoicing data. Which database solution meets these requirements at the LOWEST cost?

A. Amazon Neptune 
B. Amazon Aurora PostgreSQL Serverless 
C. Amazon RDS for PostgreSQL 
D. Amazon DynamoDB 

Question # 61

A gaming firm recently purchased an iOS game that is especially popular during the Christmas season. The firm has decided to add a leaderboard to the game, powered by Amazon DynamoDB. The application's load is expected to increase significantly during the Christmas season. Which solution satisfies these criteria at the lowest possible cost?

A. DynamoDB Streams 
B. DynamoDB with DynamoDB Accelerator 
C. DynamoDB with on-demand capacity mode 
D. DynamoDB with provisioned capacity mode with Auto Scaling 
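The choice between on-demand and provisioned capacity here is essentially a utilization question: a predictable seasonal ramp keeps provisioned capacity busy, and well-utilized provisioned throughput costs less per request than on-demand. The sketch below illustrates the shape of that trade-off; the unit prices are placeholders, not current AWS pricing.

```python
# Illustrative comparison of per-request cost for on-demand vs provisioned
# DynamoDB capacity. PRICES BELOW ARE PLACEHOLDERS, not real AWS rates;
# only the direction of the comparison is the point.

OD_PRICE_PER_MILLION_WRITES = 1.25   # placeholder on-demand rate
PROV_PRICE_PER_WCU_HOUR = 0.00065    # placeholder provisioned rate

def provisioned_cost_per_million_writes(utilization: float) -> float:
    # One WCU can serve up to 3,600 writes per hour; effective per-write
    # cost rises as utilization of the provisioned capacity falls.
    writes_per_wcu_hour = 3600 * utilization
    return PROV_PRICE_PER_WCU_HOUR / writes_per_wcu_hour * 1_000_000

# Well-utilized provisioned capacity (steady seasonal load) beats the
# flat on-demand rate; mostly idle provisioned capacity does not.
print(provisioned_cost_per_million_writes(1.00) < OD_PRICE_PER_MILLION_WRITES)
print(provisioned_cost_per_million_writes(0.05) > OD_PRICE_PER_MILLION_WRITES)
```

With auto scaling tracking the seasonal ramp to keep utilization high, provisioned mode (option D) comes out cheapest for this workload.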

Question # 62

An ecommerce company uses a backend application that stores data in an Amazon DynamoDB table. The backend application runs in a private subnet in a VPC and must connect to this table. The company must minimize any network latency that results from network connectivity issues, even during periods of heavy application usage. A database administrator also needs the ability to use a private connection to connect to the DynamoDB table from the application. Which solution will meet these requirements? 

A. Use network ACLs to ensure that any outgoing or incoming connections to any port except DynamoDB are deactivated. Encrypt API calls by using TLS. 
B. Create a VPC endpoint for DynamoDB in the application's VPC. Use the VPC endpoint to access the table. 
C. Create an AWS Lambda function that has access to DynamoDB. Restrict outgoing access only to this Lambda function from the application. 
D. Use a VPN to route all communication to DynamoDB through the company's own corporate network infrastructure. 
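Option B relies on a gateway VPC endpoint, which routes DynamoDB traffic over the AWS network rather than the public internet. A minimal sketch of the request behind it is below; the service name format (`com.amazonaws.<region>.dynamodb`) is real, but the VPC and route table IDs are placeholders, and in practice the dict would be passed to boto3's `ec2.create_vpc_endpoint`.

```python
# Sketch of the parameters for creating a gateway VPC endpoint for
# DynamoDB (option B). IDs are placeholders; in practice this dict would
# be passed to boto3 as ec2.create_vpc_endpoint(**params).

def dynamodb_endpoint_params(region: str, vpc_id: str, route_table_ids: list) -> dict:
    return {
        "VpcEndpointType": "Gateway",  # DynamoDB uses gateway endpoints, not interface endpoints
        "ServiceName": f"com.amazonaws.{region}.dynamodb",
        "VpcId": vpc_id,
        "RouteTableIds": route_table_ids,  # routes to DynamoDB go via the endpoint
    }

params = dynamodb_endpoint_params("us-east-1", "vpc-0abc123", ["rtb-0def456"])
print(params["ServiceName"])  # com.amazonaws.us-east-1.dynamodb
```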

Question # 63

A finance company migrated its on-premises PostgreSQL database to an Amazon Aurora PostgreSQL DB cluster. During a review after the migration, a database specialist discovers that the database is not encrypted at rest. The database must be encrypted at rest as soon as possible to meet security requirements. The database specialist must enable encryption for the DB cluster with minimal downtime. Which solution will meet these requirements? 

A. Modify the unencrypted DB cluster using the AWS Management Console. Enable encryption and choose to apply the change immediately. 
B. Take a snapshot of the unencrypted DB cluster and restore it to a new DB cluster with encryption enabled. Update any database connection strings to reference the new DB cluster endpoint, and then delete the unencrypted DB cluster. 
C. Create an encrypted Aurora Replica of the unencrypted DB cluster. Promote the Aurora Replica as the new master. 
D. Create a new DB cluster with encryption enabled and use the pg_dump and pg_restore utilities to load data to the new DB cluster. Update any database connection strings to reference the new DB cluster endpoint, and then delete the unencrypted DB cluster. 

Question # 64

An internet advertising firm stores its data in an Amazon DynamoDB table. DynamoDB Streams is enabled on the table, and one of the keys has a global secondary index. The table is encrypted using a customer managed AWS Key Management Service (AWS KMS) key. The firm has decided to expand worldwide and wants to replicate the table in a new AWS Region by using DynamoDB global tables. Upon review, an administrator observes the following: 
No role with the dynamodb:CreateGlobalTable permission exists in the account. 
An empty table with the same name exists in the new Region where replication is desired. 
A global secondary index with the same partition key but a different sort key exists in the new Region where replication is desired. 
Which settings will prevent the firm from creating a global table or replica in the new Region? (Select two.)

A. A global secondary index with the same partition key but a different sort key exists in the new Region where replication is desired. 
B. An empty table with the same name exists in the Region where replication is desired. 
C. No role with the dynamodb:CreateGlobalTable permission exists in the account. 
D. DynamoDB Streams is enabled for the table. 
E. The table is encrypted using a KMS customer managed key. 

Question # 65

A company is planning to use Amazon RDS for SQL Server for one of its critical applications. The company's security team requires that the users of the RDS for SQL Server DB instance are authenticated with on-premises Microsoft Active Directory credentials. Which combination of steps should a database specialist take to meet this requirement? (Choose three.) 

A. Extend the on-premises Active Directory to AWS by using AD Connector. 
B. Create an IAM user that uses the AmazonRDSDirectoryServiceAccess managed IAM policy. 
C. Create a directory by using AWS Directory Service for Microsoft Active Directory. 
D. Create an Active Directory domain controller on Amazon EC2. 
E. Create an IAM role that uses the AmazonRDSDirectoryServiceAccess managed IAM policy. 
F. Create a one-way forest trust from the AWS Directory Service for Microsoft Active Directory directory to the on-premises Active Directory.