
PDF Only

$35.00 Free Updates Up to 90 Days

  • MLS-C01 Dumps PDF
  • 281 Questions
  • Updated On March 25, 2024

PDF + Test Engine

$60.00 Free Updates Up to 90 Days

  • MLS-C01 Question Answers
  • 281 Questions
  • Updated On March 25, 2024

Test Engine

$50.00 Free Updates Up to 90 Days

  • MLS-C01 Practice Questions
  • 281 Questions
  • Updated On March 25, 2024

Check Our Free Amazon MLS-C01 Online Test Engine Demo.

How to pass the Amazon MLS-C01 exam with the help of dumps?

DumpsPool provides the high-quality resources you have been searching for, so it's time to stop stressing and get ready for the exam. Our Online Test Engine provides the guidance you need to pass the certification exam. We guarantee top-grade results because we cover each topic in a precise and understandable manner. Our expert team prepared the latest Amazon MLS-C01 Dumps to satisfy your training needs. Plus, they come in two formats: Dumps PDF and Online Test Engine.

How Do I Know Amazon MLS-C01 Dumps are Worth it?

Did we mention that our latest MLS-C01 Dumps PDF is also available as an Online Test Engine? And that's just where the benefits begin. Of all the features you are offered here at DumpsPool, the money-back guarantee has to be the best one, so you don't have to worry about the payment. Let us explore all the other reasons you would want to buy from us. Besides affordable Real Exam Dumps, you are offered three months of free updates.

You can easily scroll through our large catalog of certification exams and pick any exam to start your training. That's right, DumpsPool isn't limited to just Amazon exams. We know our customers need the support of an authentic and reliable resource, so we make sure there is never any outdated content in our study resources. Our expert team keeps everything up to the mark by keeping an eye on every single update. Our main focus is that you understand the real exam format, so you can pass the exam more easily!

IT Students Are Using our AWS Certified Machine Learning - Specialty Dumps Worldwide!

It is a well-established fact that certification exams can't be conquered without some help from experts. That is exactly the point of using AWS Certified Machine Learning - Specialty Practice Question Answers: you are constantly surrounded by IT experts who've been through what you are about to face and know better. The 24/7 customer service of DumpsPool ensures you are in touch with these experts whenever needed. Our 100% success rate and validity around the world make us the most trusted resource candidates use. The updated Dumps PDF helps you pass the exam on the first attempt, and the money-back guarantee lets you buy with confidence: you can claim a refund if you do not pass the exam.

How to Get MLS-C01 Real Exam Dumps?

Getting access to the real exam dumps is as easy as pressing a button, literally! There are various resources available online, but the majority of them sell scams or copied content. So, if you are going to attempt the MLS-C01 exam, you need to be sure you are buying the right kind of dumps. All the Dumps PDF available on DumpsPool are unique and fully up to date, and our Practice Question Answers are tested and approved by professionals, making them the most authentic resource available on the internet. Our experts have made sure the Online Test Engine is free from outdated and fake content, repeated questions, and false or vague information. We make every penny count, and you leave our platform fully satisfied!

Amazon MLS-C01 Sample Question Answers

Question # 1

A data scientist stores financial datasets in Amazon S3. The data scientist uses Amazon Athena to query the datasets by using SQL. The data scientist uses Amazon SageMaker to deploy a machine learning (ML) model. The data scientist wants to obtain inferences from the model at the SageMaker endpoint. However, when the data scientist attempts to invoke the SageMaker endpoint, the data scientist receives SQL statement failures. The data scientist's IAM user is currently unable to invoke the SageMaker endpoint.
Which combination of actions will give the data scientist's IAM user the ability to invoke the SageMaker endpoint? (Select THREE.)

A. Attach the AmazonAthenaFullAccess AWS managed policy to the user identity.
B. Include a policy statement for the data scientist's IAM user that allows the IAM user to perform the sagemaker:InvokeEndpoint action.
C. Include an inline policy for the data scientist's IAM user that allows SageMaker to read S3 objects.
D. Include a policy statement for the data scientist's IAM user that allows the IAM user to perform the sagemaker:GetRecord action.
E. Include the SQL statement "USING EXTERNAL FUNCTION ml_function_name" in the Athena SQL query.
F. Perform a user remapping in SageMaker to map the IAM user to another IAM user that is on the hosted endpoint.
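
A minimal sketch of how the sagemaker:InvokeEndpoint permission from option B could be attached as an inline policy with boto3. The user name, account ID, and endpoint ARN are hypothetical placeholders, not values from the question.

```python
import json
import boto3

iam = boto3.client("iam")

# Inline policy allowing only the InvokeEndpoint action on one endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sagemaker:InvokeEndpoint",
            "Resource": "arn:aws:sagemaker:us-east-1:111122223333:endpoint/my-endpoint",
        }
    ],
}

iam.put_user_policy(
    UserName="data-scientist",
    PolicyName="AllowInvokeSageMakerEndpoint",
    PolicyDocument=json.dumps(policy),
)
```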

Question # 2

A Machine Learning Specialist is designing a scalable data storage solution for Amazon SageMaker. There is an existing TensorFlow-based model implemented as a train.py script that relies on static training data that is currently stored as TFRecords.
Which method of providing training data to Amazon SageMaker would meet the business requirements with the LEAST development overhead?

A. Use Amazon SageMaker script mode and use train.py unchanged. Point the Amazon SageMaker training invocation to the local path of the data without reformatting the training data.
B. Use Amazon SageMaker script mode and use train.py unchanged. Put the TFRecord data into an Amazon S3 bucket. Point the Amazon SageMaker training invocation to the S3 bucket without reformatting the training data.
C. Rewrite the train.py script to add a section that converts TFRecords to protobuf and ingests the protobuf data instead of TFRecords.
D. Prepare the data in the format accepted by Amazon SageMaker. Use AWS Glue or AWS Lambda to reformat and store the data in an Amazon S3 bucket.
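
A sketch of option B using the SageMaker Python SDK's script-mode TensorFlow estimator. The role ARN, bucket path, and framework versions are assumptions for illustration; train.py is the existing script from the question, used unchanged.

```python
from sagemaker.tensorflow import TensorFlow

# Script-mode estimator that runs the existing train.py as-is.
estimator = TensorFlow(
    entry_point="train.py",
    role="arn:aws:iam::111122223333:role/SageMakerRole",
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    framework_version="2.11",
    py_version="py39",
)

# Point the training job at the TFRecord files already in S3;
# no reformatting of the training data is required.
estimator.fit({"training": "s3://my-bucket/tfrecords/"})
```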

Question # 3

A credit card company wants to identify fraudulent transactions in real time. A data scientist builds a machine learning model for this purpose. The transactional data is captured and stored in Amazon S3. The historic data is already labeled with two classes: fraud (positive) and fair transactions (negative). The data scientist removes all the missing data and builds a classifier by using the XGBoost algorithm in Amazon SageMaker. The model produces the following results:

• True positive rate (TPR): 0.700
• False negative rate (FNR): 0.300
• True negative rate (TNR): 0.977
• False positive rate (FPR): 0.023
• Overall accuracy: 0.949

Which solution should the data scientist use to improve the performance of the model?

A. Apply the Synthetic Minority Oversampling Technique (SMOTE) on the minority class in the training dataset. Retrain the model with the updated training data.
B. Apply the Synthetic Minority Oversampling Technique (SMOTE) on the majority class in the training dataset. Retrain the model with the updated training data.
C. Undersample the minority class.
D. Oversample the majority class.
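
A runnable sketch of option A's technique using the imbalanced-learn library on a synthetic stand-in dataset (the real fraud data is not available here). SMOTE synthesizes new minority-class examples; note that it oversamples the minority class, not the majority class.

```python
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Synthetic stand-in for the fraud dataset: roughly 2% positive class.
X, y = make_classification(n_samples=10_000, weights=[0.98], random_state=42)
print(Counter(y))        # heavily imbalanced, e.g. {0: ~9800, 1: ~200}

# SMOTE generates synthetic minority-class samples until classes balance.
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
print(Counter(y_res))    # balanced; retrain the XGBoost model on this data
```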

Question # 4

A pharmaceutical company performs periodic audits of clinical trial sites to quickly resolve critical findings. The company stores audit documents in text format. Auditors have requested help from a data science team to quickly analyze the documents. The auditors need to discover the 10 main topics within the documents to prioritize and distribute the review work among the auditing team members. Documents that describe adverse events must receive the highest priority. A data scientist will use statistical modeling to discover abstract topics and to provide a list of the top words for each category to help the auditors assess the relevance of the topic.
Which algorithms are best suited to this scenario? (Choose two.)

A. Latent Dirichlet allocation (LDA)
B. Random Forest classifier
C. Neural topic modeling (NTM)
D. Linear support vector machine
E. Linear regression
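
A small sketch of option A's technique. It uses scikit-learn's LDA implementation as a local stand-in for the SageMaker LDA algorithm, on a tiny invented corpus (the real audit documents are not available); the scenario would use 10 topics, 2 here only because the toy corpus is small.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Invented stand-in documents resembling audit findings.
docs = [
    "patient reported an adverse event after the dosage change",
    "the site audit found missing consent forms",
    "a serious adverse reaction required hospitalization",
    "trial site staff training records were incomplete",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

# Fit LDA to discover abstract topics from word counts.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Print the top words per topic so auditors can judge each topic's relevance.
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {idx}: {top}")
```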

Question # 5

A media company wants to create a solution that identifies celebrities in pictures that users upload. The company also wants to identify the IP address and the timestamp details from the users so the company can prevent users from uploading pictures from unauthorized locations.
Which solution will meet these requirements with LEAST development effort?

A. Use AWS Panorama to identify celebrities in the pictures. Use AWS CloudTrail to capture IP address and timestamp details.
B. Use AWS Panorama to identify celebrities in the pictures. Make calls to the AWS Panorama Device SDK to capture IP address and timestamp details.
C. Use Amazon Rekognition to identify celebrities in the pictures. Use AWS CloudTrail to capture IP address and timestamp details.
D. Use Amazon Rekognition to identify celebrities in the pictures. Use the text detection feature to capture IP address and timestamp details.
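
A sketch of the Rekognition call from option C. The bucket and object key are hypothetical placeholders for an uploaded picture.

```python
import boto3

rekognition = boto3.client("rekognition")

# Celebrity recognition on an image stored in S3.
response = rekognition.recognize_celebrities(
    Image={"S3Object": {"Bucket": "my-uploads-bucket", "Name": "photo.jpg"}}
)

# Each match includes the celebrity's name and a confidence score.
for celeb in response["CelebrityFaces"]:
    print(celeb["Name"], celeb["MatchConfidence"])
```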

Question # 6

A retail company stores 100 GB of daily transactional data in Amazon S3 at periodic intervals. The company wants to identify the schema of the transactional data. The company also wants to perform transformations on the transactional data that is in Amazon S3. The company wants to use a machine learning (ML) approach to detect fraud in the transformed data.
Which combination of solutions will meet these requirements with the LEAST operational overhead? (Select THREE.)

A. Use Amazon Athena to scan the data and identify the schema.
B. Use AWS Glue crawlers to scan the data and identify the schema.
C. Use Amazon Redshift stored procedures to perform data transformations.
D. Use AWS Glue workflows and AWS Glue jobs to perform data transformations.
E. Use Amazon Redshift ML to train a model to detect fraud.
F. Use Amazon Fraud Detector to train a model to detect fraud.
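
A sketch of option B: creating and starting an AWS Glue crawler that infers the schema of the S3 data and registers it in the Glue Data Catalog. The crawler name, role ARN, database name, and S3 path are hypothetical placeholders.

```python
import boto3

glue = boto3.client("glue")

# The crawler scans the S3 prefix, infers the schema, and writes a table
# definition into the Glue Data Catalog.
glue.create_crawler(
    Name="transactions-crawler",
    Role="arn:aws:iam::111122223333:role/GlueCrawlerRole",
    DatabaseName="retail",
    Targets={"S3Targets": [{"Path": "s3://my-bucket/transactions/"}]},
)

glue.start_crawler(Name="transactions-crawler")
```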

Question # 7

An automotive company uses computer vision in its autonomous cars. The company trained its object detection models successfully by using transfer learning from a convolutional neural network (CNN). The company trained the models by using PyTorch through the Amazon SageMaker SDK. The vehicles have limited hardware and compute power. The company wants to optimize the model to reduce memory, battery, and hardware consumption without a significant sacrifice in accuracy.
Which solution will improve the computational efficiency of the models?

A. Use Amazon CloudWatch metrics to gain visibility into the SageMaker training weights, gradients, biases, and activation outputs. Compute the filter ranks based on the training information. Apply pruning to remove the low-ranking filters. Set new weights based on the pruned set of filters. Run a new training job with the pruned model.
B. Use Amazon SageMaker Ground Truth to build and run data labeling workflows. Collect a larger labeled dataset with the labeling workflows. Run a new training job that uses the new labeled data with previous training data.
C. Use Amazon SageMaker Debugger to gain visibility into the training weights, gradients, biases, and activation outputs. Compute the filter ranks based on the training information. Apply pruning to remove the low-ranking filters. Set the new weights based on the pruned set of filters. Run a new training job with the pruned model.
D. Use Amazon SageMaker Model Monitor to gain visibility into the ModelLatency metric and OverheadLatency metric of the model after the company deploys the model. Increase the model learning rate. Run a new training job.

Question # 8

A media company is building a computer vision model to analyze images that are on social media. The model consists of CNNs that the company trained by using images that the company stores in Amazon S3. The company used an Amazon SageMaker training job in File mode with a single Amazon EC2 On-Demand Instance. Every day, the company updates the model by using about 10,000 images that the company has collected in the last 24 hours. The company configures training with only one epoch. The company wants to speed up training and lower costs without the need to make any code changes.
Which solution will meet these requirements?

A. Instead of File mode, configure the SageMaker training job to use Pipe mode. Ingest the data from a pipe.
B. Instead of File mode, configure the SageMaker training job to use FastFile mode with no other changes.
C. Instead of On-Demand Instances, configure the SageMaker training job to use Spot Instances. Make no other changes.
D. Instead of On-Demand Instances, configure the SageMaker training job to use Spot Instances. Implement model checkpoints.
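
A sketch of option B: FastFile mode is set with a single estimator parameter, so the training code itself does not change. The image URI, role, and bucket are hypothetical placeholders.

```python
from sagemaker.estimator import Estimator

# FastFile mode streams objects from S3 on demand, so training starts
# without first downloading the full dataset (unlike File mode).
estimator = Estimator(
    image_uri="<training-image-uri>",  # placeholder for the training image
    role="arn:aws:iam::111122223333:role/SageMakerRole",
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    input_mode="FastFile",
)

estimator.fit({"training": "s3://my-bucket/images/"})
```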

Question # 9

A data scientist is building a forecasting model for a retail company by using the most recent 5 years of sales records that are stored in a data warehouse. The dataset contains sales records for each of the company's stores across five commercial regions. The data scientist creates a working dataset with StoreID, Region, Date, and Sales Amount as columns. The data scientist wants to analyze yearly average sales for each region. The scientist also wants to compare how each region performed compared to average sales across all commercial regions.
Which visualization will help the data scientist better understand the data trend?

A. Create an aggregated dataset by using the Pandas GroupBy function to get average sales for each year for each store. Create a bar plot, faceted by year, of average sales for each store. Add an extra bar in each facet to represent average sales.
B. Create an aggregated dataset by using the Pandas GroupBy function to get average sales for each year for each store. Create a bar plot, colored by region and faceted by year, of average sales for each store. Add a horizontal line in each facet to represent average sales.
C. Create an aggregated dataset by using the Pandas GroupBy function to get average sales for each year for each region. Create a bar plot of average sales for each region. Add an extra bar in each facet to represent average sales.
D. Create an aggregated dataset by using the Pandas GroupBy function to get average sales for each year for each region. Create a bar plot, faceted by year, of average sales for each region. Add a horizontal line in each facet to represent average sales.
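
A sketch of the aggregation and plot described in option D. The file name is a placeholder, the column names follow the stem, and the code assumes the dataset spans more than one year.

```python
import matplotlib.pyplot as plt
import pandas as pd

# "sales.csv" is a placeholder with StoreID, Region, Date, SalesAmount columns.
df = pd.read_csv("sales.csv", parse_dates=["Date"])
df["Year"] = df["Date"].dt.year

# Average sales per region per year.
region_avg = df.groupby(["Year", "Region"])["SalesAmount"].mean().reset_index()

years = sorted(region_avg["Year"].unique())
fig, axes = plt.subplots(1, len(years), figsize=(4 * len(years), 4), sharey=True)
for ax, year in zip(axes, years):
    data = region_avg[region_avg["Year"] == year]
    ax.bar(data["Region"], data["SalesAmount"])
    # Horizontal line: that year's average sales across all regions.
    ax.axhline(df.loc[df["Year"] == year, "SalesAmount"].mean(), color="red")
    ax.set_title(str(year))
plt.show()
```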

Question # 10

A data scientist is training a large PyTorch model by using Amazon SageMaker. It takes 10 hours on average to train the model on GPU instances. The data scientist suspects that training is not converging and that resource utilization is not optimal.
What should the data scientist do to identify and address training issues with the LEAST development effort?

A. Use CPU utilization metrics that are captured in Amazon CloudWatch. Configure a CloudWatch alarm to stop the training job early if low CPU utilization occurs.
B. Use high-resolution custom metrics that are captured in Amazon CloudWatch. Configure an AWS Lambda function to analyze the metrics and to stop the training job early if issues are detected.
C. Use the SageMaker Debugger vanishing_gradient and LowGPUUtilization built-in rules to detect issues and to launch the StopTrainingJob action if issues are detected.
D. Use the SageMaker Debugger confusion and feature_importance_overweight built-in rules to detect issues and to launch the StopTrainingJob action if issues are detected.
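
A sketch of how option C's built-in Debugger rules can be attached to an estimator with the SageMaker Python SDK (v2-style helper names; treat the exact helpers as an assumption of this sketch).

```python
from sagemaker.debugger import ProfilerRule, Rule, rule_configs

# Stop the training job automatically when the convergence rule fires.
actions = rule_configs.ActionList(rule_configs.StopTraining())

rules = [
    # Detects gradients shrinking toward zero (training not converging).
    Rule.sagemaker(rule_configs.vanishing_gradient(), actions=actions),
    # Profiler rule that flags persistently idle GPUs.
    ProfilerRule.sagemaker(rule_configs.low_gpu_utilization()),
]

# Pass rules=rules when constructing the PyTorch estimator, e.g.
# PyTorch(..., rules=rules), so both checks run alongside training.
```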

Question # 11

A company builds computer vision models that use deep learning for the autonomous vehicle industry. A machine learning (ML) specialist uses an Amazon EC2 instance that has a CPU:GPU ratio of 12:1 to train the models. The ML specialist examines the instance metric logs and notices that the GPU is idle half of the time. The ML specialist must reduce training costs without increasing the duration of the training jobs.
Which solution will meet these requirements?

A. Switch to an instance type that has only CPUs.
B. Use a heterogeneous cluster that has two different instance groups.
C. Use memory-optimized EC2 Spot Instances for the training jobs.
D. Switch to an instance type that has a CPU:GPU ratio of 6:1.

Question # 12

An engraving company wants to automate its quality control process for plaques. The company performs the process before mailing each customized plaque to a customer. The company has created an Amazon S3 bucket that contains images of defects that should cause a plaque to be rejected. Low-confidence predictions must be sent to an internal team of reviewers who are using Amazon Augmented AI (Amazon A2I).
Which solution will meet these requirements?

A. Use Amazon Textract for automatic processing. Use Amazon A2I with Amazon Mechanical Turk for manual review.
B. Use Amazon Rekognition for automatic processing. Use Amazon A2I with a private workforce option for manual review.
C. Use Amazon Transcribe for automatic processing. Use Amazon A2I with a private workforce option for manual review.
D. Use AWS Panorama for automatic processing. Use Amazon A2I with Amazon Mechanical Turk for manual review.

Question # 13

An Amazon SageMaker notebook instance is launched into Amazon VPC. The SageMaker notebook references data contained in an Amazon S3 bucket in another account. The bucket is encrypted using SSE-KMS. The instance returns an access denied error when trying to access data in Amazon S3.
Which of the following are required to access the bucket and avoid the access denied error? (Select THREE.)

A. An AWS KMS key policy that allows access to the customer master key (CMK)
B. A SageMaker notebook security group that allows access to Amazon S3
C. An IAM role that allows access to the specific S3 bucket
D. A permissive S3 bucket policy
E. An S3 bucket owner that matches the notebook owner
F. A SageMaker notebook subnet ACL that allows traffic to Amazon S3

Question # 14

A machine learning (ML) engineer has created a feature repository in Amazon SageMaker Feature Store for the company. The company has AWS accounts for development, integration, and production. The company hosts a feature store in the development account. The company uses Amazon S3 buckets to store feature values offline. The company wants to share features and to allow the integration account and the production account to reuse the features that are in the feature repository.
Which combination of steps will meet these requirements? (Select TWO.)

A. Create an IAM role in the development account that the integration account and production account can assume. Attach IAM policies to the role that allow access to the feature repository and the S3 buckets.
B. Share the feature repository that is associated with the S3 buckets from the development account to the integration account and the production account by using AWS Resource Access Manager (AWS RAM).
C. Use AWS Security Token Service (AWS STS) from the integration account and the production account to retrieve credentials for the development account.
D. Set up S3 replication between the development S3 buckets and the integration andproduction S3 buckets.
E. Create an AWS PrivateLink endpoint in the development account for SageMaker.

Question # 15

A network security vendor needs to ingest telemetry data from thousands of endpoints that run all over the world. The data is transmitted every 30 seconds in the form of records that contain 50 fields. Each record is up to 1 KB in size. The security vendor uses Amazon Kinesis Data Streams to ingest the data. The vendor requires hourly summaries of the records that Kinesis Data Streams ingests. The vendor will use Amazon Athena to query the records and to generate the summaries. The Athena queries will target 7 to 12 of the available data fields.
Which solution will meet these requirements with the LEAST amount of customization to transform and store the ingested data?

A. Use AWS Lambda to read and aggregate the data hourly. Transform the data and store it in Amazon S3 by using Amazon Kinesis Data Firehose.
B. Use Amazon Kinesis Data Firehose to read and aggregate the data hourly. Transform the data and store it in Amazon S3 by using a short-lived Amazon EMR cluster.
C. Use Amazon Kinesis Data Analytics to read and aggregate the data hourly. Transform the data and store it in Amazon S3 by using Amazon Kinesis Data Firehose.
D. Use Amazon Kinesis Data Firehose to read and aggregate the data hourly. Transform the data and store it in Amazon S3 by using AWS Lambda.

Question # 16

A data scientist is building a linear regression model. The scientist inspects the dataset and notices that the mode of the distribution is lower than the median, and the median is lower than the mean.
Which data transformation will give the data scientist the ability to apply a linear regression model?

A. Exponential transformation
B. Logarithmic transformation
C. Polynomial transformation
D. Sinusoidal transformation
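
A quick numeric illustration of option B on synthetic right-skewed data (mode < median < mean, as in the stem). The log transform compresses the long right tail toward a more symmetric distribution, which suits the assumptions of linear regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Lognormal samples are right-skewed: median < mean.
x = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
print(np.median(x), x.mean())          # median noticeably below mean

# log1p pulls in the right tail; median and mean nearly coincide after it.
x_log = np.log1p(x)
print(np.median(x_log), x_log.mean())
```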

Question # 17

A car company is developing a machine learning solution to detect whether a car is present in an image. The image dataset consists of one million images. Each image in the dataset is 200 pixels in height by 200 pixels in width. Each image is labeled as either having a car or not having a car.
Which architecture is MOST likely to produce a model that detects whether a car is present in an image with the highest accuracy?

A. Use a deep convolutional neural network (CNN) classifier with the images as input. Include a linear output layer that outputs the probability that an image contains a car.
B. Use a deep convolutional neural network (CNN) classifier with the images as input. Include a softmax output layer that outputs the probability that an image contains a car.
C. Use a deep multilayer perceptron (MLP) classifier with the images as input. Include a linear output layer that outputs the probability that an image contains a car.
D. Use a deep multilayer perceptron (MLP) classifier with the images as input. Include a softmax output layer that outputs the probability that an image contains a car.

Question # 18

A university wants to develop a targeted recruitment strategy to increase new student enrollment. A data scientist gathers information about the academic performance history of students. The data scientist wants to use the data to build student profiles. The university will use the profiles to direct resources to recruit students who are likely to enroll in the university.
Which combination of steps should the data scientist take to predict whether a particular student applicant is likely to enroll in the university? (Select TWO.)

A. Use Amazon SageMaker Ground Truth to sort the data into two groups named "enrolled" or "not enrolled."
B. Use a forecasting algorithm to run predictions.
C. Use a regression algorithm to run predictions.
D. Use a classification algorithm to run predictions.
E. Use the built-in Amazon SageMaker k-means algorithm to cluster the data into two groups named "enrolled" or "not enrolled."

Question # 19

An insurance company developed a new experimental machine learning (ML) model to replace an existing model that is in production. The company must validate the quality of predictions from the new experimental model in a production environment before the company uses the new experimental model to serve general user requests. Only one model can serve user requests at a time. The company must measure the performance of the new experimental model without affecting the current live traffic.
Which solution will meet these requirements?

A. A/B testing
B. Canary release
C. Shadow deployment
D. Blue/green deployment

Question # 20

A company wants to detect credit card fraud. The company has observed that an average of 2% of credit card transactions are fraudulent. A data scientist trains a classifier on a year's worth of credit card transaction data. The classifier needs to identify the fraudulent transactions. The company wants to accurately capture as many fraudulent transactions as possible.
Which metrics should the data scientist use to optimize the classifier? (Select TWO.)

A. Specificity
B. False positive rate
C. Accuracy
D. F1 score
E. True positive rate
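
A tiny worked example of the metrics in options D and E on invented labels (1 = fraud). Recall, which is the true positive rate, measures the share of fraud actually caught; F1 balances it against precision.

```python
from sklearn.metrics import f1_score, recall_score

y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]   # 2 fraudulent transactions
y_pred = [0, 0, 0, 0, 0, 1, 0, 0, 1, 0]   # 1 caught, 1 missed, 1 false alarm

print("recall (TPR):", recall_score(y_true, y_pred))  # 1 TP / (1 TP + 1 FN) = 0.5
print("F1:", f1_score(y_true, y_pred))                # precision 0.5, recall 0.5 -> 0.5
```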

Question # 21

A company deployed a machine learning (ML) model on the company website to predict real estate prices. Several months after deployment, an ML engineer notices that the accuracy of the model has gradually decreased. The ML engineer needs to improve the accuracy of the model. The engineer also needs to receive notifications for any future performance issues.
Which solution will meet these requirements?

A. Perform incremental training to update the model. Activate Amazon SageMaker Model Monitor to detect model performance issues and to send notifications.
B. Use Amazon SageMaker Model Governance. Configure Model Governance to automatically adjust model hyperparameters. Create a performance threshold alarm in Amazon CloudWatch to send notifications.
C. Use Amazon SageMaker Debugger with appropriate thresholds. Configure Debugger to send Amazon CloudWatch alarms to alert the team. Retrain the model by using only data from the previous several months.
D. Use only data from the previous several months to perform incremental training to update the model. Use Amazon SageMaker Model Monitor to detect model performance issues and to send notifications.

Question # 22

A retail company wants to build a recommendation system for the company's website. The system needs to provide recommendations for existing users and needs to base those recommendations on each user's past browsing history. The system also must filter out any items that the user previously purchased.
Which solution will meet these requirements with the LEAST development effort?

A. Train a model by using a user-based collaborative filtering algorithm on Amazon SageMaker. Host the model on a SageMaker real-time endpoint. Configure an Amazon API Gateway API and an AWS Lambda function to handle real-time inference requests that the web application sends. Exclude the items that the user previously purchased from the results before sending the results back to the web application.
B. Use an Amazon Personalize PERSONALIZED_RANKING recipe to train a model. Create a real-time filter to exclude items that the user previously purchased. Create and deploy a campaign on Amazon Personalize. Use the GetPersonalizedRanking API operation to get the real-time recommendations.
C. Use an Amazon Personalize USER_PERSONALIZATION recipe to train a model. Create a real-time filter to exclude items that the user previously purchased. Create and deploy a campaign on Amazon Personalize. Use the GetRecommendations API operation to get the real-time recommendations.
D. Train a neural collaborative filtering model on Amazon SageMaker by using GPU instances. Host the model on a SageMaker real-time endpoint. Configure an Amazon API Gateway API and an AWS Lambda function to handle real-time inference requests that the web application sends. Exclude the items that the user previously purchased from the results before sending the results back to the web application.
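
A sketch of option C's runtime call, with a filter attached to exclude previously purchased items. Both ARNs and the user ID are hypothetical placeholders; the filter itself would be created with an expression such as EXCLUDE ItemID WHERE Interactions.event_type IN ("purchase").

```python
import boto3

personalize_runtime = boto3.client("personalize-runtime")

# Real-time recommendations with a purchase-exclusion filter applied.
response = personalize_runtime.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:111122223333:campaign/site-recs",
    userId="user-123",
    filterArn="arn:aws:personalize:us-east-1:111122223333:filter/exclude-purchased",
)

for item in response["itemList"]:
    print(item["itemId"])
```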

Question # 23

A machine learning (ML) specialist is using Amazon SageMaker hyperparameter optimization (HPO) to improve a model's accuracy. The learning rate parameter is specified in the following HPO configuration: During the results analysis, the ML specialist determines that most of the training jobs had a learning rate between 0.01 and 0.1. The best result had a learning rate of less than 0.01. Training jobs need to run regularly over a changing dataset. The ML specialist needs to find a tuning mechanism that uses different learning rates more evenly from the provided range between MinValue and MaxValue.
Which solution provides the MOST accurate result?

A. Modify the HPO configuration as follows: Select the most accurate hyperparameter configuration from this HPO job.
B. Run three different HPO jobs that use different learning rates from the following intervals for MinValue and MaxValue while using the same number of training jobs for each HPO job: [0.01, 0.1], [0.001, 0.01], [0.0001, 0.001]. Select the most accurate hyperparameter configuration from these three HPO jobs.
C. Modify the HPO configuration as follows: Select the most accurate hyperparameter configuration from this training job.
D. Run three different HPO jobs that use different learning rates from the following intervals for MinValue and MaxValue. Divide the number of training jobs for each HPO job by three: [0.01, 0.1], [0.001, 0.01], [0.0001, 0.001]. Select the most accurate hyperparameter configuration from these three HPO jobs.
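
The configuration images referenced in the stem and in options A and C are not reproduced here. A standard way to sample learning rates evenly across orders of magnitude in SageMaker HPO, and plausibly what the modified configuration shows, is logarithmic scaling on the parameter range:

```python
from sagemaker.tuner import ContinuousParameter

# Logarithmic scaling explores values below 0.01 as thoroughly as the
# 0.01-0.1 band, instead of concentrating samples near the top of the range.
hyperparameter_ranges = {
    "learning_rate": ContinuousParameter(0.0001, 0.1, scaling_type="Logarithmic")
}

# Pass hyperparameter_ranges to a HyperparameterTuner as usual.
```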

Question # 24

A data engineer is preparing a dataset that a retail company will use to predict the number of visitors to stores. The data engineer created an Amazon S3 bucket. The engineer subscribed the S3 bucket to an AWS Data Exchange data product for general economic indicators. The data engineer wants to join the economic indicator data to an existing table in Amazon Athena to merge with the business data. All these transformations must finish running in 30-60 minutes.
Which solution will meet these requirements MOST cost-effectively?

A. Configure the AWS Data Exchange product as a producer for an Amazon Kinesis data stream. Use an Amazon Kinesis Data Firehose delivery stream to transfer the data to Amazon S3. Run an AWS Glue job that will merge the existing business data with the Athena table. Write the result set back to Amazon S3.
B. Use an S3 event on the AWS Data Exchange S3 bucket to invoke an AWS Lambda function. Program the Lambda function to use Amazon SageMaker Data Wrangler to merge the existing business data with the Athena table. Write the result set back to Amazon S3.
C. Use an S3 event on the AWS Data Exchange S3 bucket to invoke an AWS Lambda function. Program the Lambda function to run an AWS Glue job that will merge the existing business data with the Athena table. Write the results back to Amazon S3.
D. Provision an Amazon Redshift cluster. Subscribe to the AWS Data Exchange product and use the product to create an Amazon Redshift table. Merge the data in Amazon Redshift. Write the results back to Amazon S3.

Question # 25

An online delivery company wants to choose the fastest courier for each delivery at the moment an order is placed. The company wants to implement this feature for existing users and new users of its application. Data scientists have trained separate models with XGBoost for this purpose, and the models are stored in Amazon S3. There is one model for each city where the company operates. The engineers are hosting these models in Amazon EC2 for responding to the web client requests, with one instance for each model, but the instances have only a 5% utilization in CPU and memory. The operations engineers want to avoid managing unnecessary resources.
Which solution will enable the company to achieve its goal with the LEAST operational overhead?

A. Create an Amazon SageMaker notebook instance for pulling all the models from Amazon S3 using the boto3 library. Remove the existing instances and use the notebook to perform a SageMaker batch transform for performing inferences offline for all the possible users in all the cities. Store the results in different files in Amazon S3. Point the web client to the files.
B. Prepare an Amazon SageMaker Docker container based on the open-source multi-model server. Remove the existing instances and create a multi-model endpoint in SageMaker instead, pointing to the S3 bucket containing all the models. Invoke the endpoint from the web client at runtime, specifying the TargetModel parameter according to the city of each request.
C. Keep only a single EC2 instance for hosting all the models. Install a model server in the instance and load each model by pulling it from Amazon S3. Integrate the instance with the web client using Amazon API Gateway for responding to the requests in real time, specifying the target resource according to the city of each request.
D. Prepare a Docker container based on the prebuilt images in Amazon SageMaker. Replace the existing instances with separate SageMaker endpoints, one for each city where the company operates. Invoke the endpoints from the web client, specifying the URL and EndpointName parameter according to the city of each request.

Question # 26

A company is using Amazon Polly to translate plaintext documents to speech for automated company announcements. However, company acronyms are being mispronounced in the current documents.
How should a Machine Learning Specialist address this issue for future documents?

A. Convert current documents to SSML with pronunciation tags.
B. Create an appropriate pronunciation lexicon.
C. Output speech marks to guide in pronunciation.
D. Use Amazon Lex to preprocess the text files for pronunciation.
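
A sketch of option B: uploading a pronunciation lexicon (PLS XML) and applying it during synthesis. The lexicon name, the example grapheme/alias pair, and the voice are illustrative placeholders.

```python
import boto3

polly = boto3.client("polly")

# PLS lexicon mapping an acronym to its spoken form.
lexicon = """<?xml version="1.0" encoding="UTF-8"?>
<lexicon version="1.0"
    xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
    alphabet="ipa" xml:lang="en-US">
  <lexeme>
    <grapheme>W3C</grapheme>
    <alias>World Wide Web Consortium</alias>
  </lexeme>
</lexicon>"""

polly.put_lexicon(Name="acronyms", Content=lexicon)

# Apply the lexicon when synthesizing future announcements.
response = polly.synthesize_speech(
    Text="The W3C publishes web standards.",
    VoiceId="Joanna",
    OutputFormat="mp3",
    LexiconNames=["acronyms"],
)
```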

Question # 27

A company wants to predict the classification of documents that are created from an application. New documents are saved to an Amazon S3 bucket every 3 seconds. The company has developed three versions of a machine learning (ML) model within Amazon SageMaker to classify document text. The company wants to deploy these three versions to predict the classification of each document.
Which approach will meet these requirements with the LEAST operational overhead?

A. Configure an S3 event notification that invokes an AWS Lambda function when new documents are created. Configure the Lambda function to create three SageMaker batch transform jobs, one batch transform job for each model for each document.
B. Deploy all the models to a single SageMaker endpoint. Treat each model as a production variant. Configure an S3 event notification that invokes an AWS Lambda function when new documents are created. Configure the Lambda function to call each production variant and return the results of each model.
C. Deploy each model to its own SageMaker endpoint. Configure an S3 event notification that invokes an AWS Lambda function when new documents are created. Configure the Lambda function to call each endpoint and return the results of each model.
D. Deploy each model to its own SageMaker endpoint. Create three AWS Lambda functions. Configure each Lambda function to call a different endpoint and return the results. Configure three S3 event notifications to invoke the Lambda functions when new documents are created.

Question # 28

A company wants to create an artificial intelligence (AI) yoga instructor that can lead large classes of students. The company needs to create a feature that can accurately count the number of students who are in a class. The company also needs a feature that can differentiate students who are performing a yoga stretch correctly from students who are performing a stretch incorrectly. To determine whether students are performing a stretch correctly, the solution needs to measure the location and angle of each student's arms and legs. A data scientist must use Amazon SageMaker to process video footage of a yoga class by extracting image frames and applying computer vision models.
Which combination of models will meet these requirements with the LEAST effort? (Select TWO.)

A. Image Classification
B. Optical Character Recognition (OCR)
C. Object Detection
D. Pose estimation
E. Image Generative Adversarial Networks (GANs)

Question # 29

A data scientist is working on a public sector project for an urban traffic system. While studying the traffic patterns, it is clear to the data scientist that the traffic behavior at each light is correlated, subject to a small stochastic error term. The data scientist must model the traffic behavior to analyze the traffic patterns and reduce congestion.
How will the data scientist MOST effectively model the problem?

A. The data scientist should obtain a correlated equilibrium policy by formulating this problem as a multi-agent reinforcement learning problem.
B. The data scientist should obtain the optimal equilibrium policy by formulating this problem as a single-agent reinforcement learning problem.
C. Rather than finding an equilibrium policy, the data scientist should obtain accurate predictors of traffic flow by using historical data through a supervised learning approach.
D. Rather than finding an equilibrium policy, the data scientist should obtain accurate predictors of traffic flow by using unlabeled simulated data representing the new traffic patterns in the city and applying an unsupervised learning approach.

Question # 30

An ecommerce company wants to use machine learning (ML) to monitor fraudulent transactions on its website. The company is using Amazon SageMaker to research, train, deploy, and monitor the ML models. The historical transactions data is in a .csv file that is stored in Amazon S3. The data contains features such as the user's IP address, navigation time, average time on each page, and the number of clicks for each session. There is no label in the data to indicate if a transaction is anomalous.
Which models should the company use in combination to detect anomalous transactions? (Select TWO.)

A. IP Insights
B. K-nearest neighbors (k-NN)
C. Linear learner with a logistic function
D. Random Cut Forest (RCF)
E. XGBoost

Question # 31

A company wants to predict stock market price trends. The company stores stock market data each business day in Amazon S3 in Apache Parquet format. The company stores 20 GB of data each day for each stock code. A data engineer must use Apache Spark to perform batch preprocessing data transformations quickly so the company can complete prediction jobs before the stock market opens the next day. The company plans to track more stock market codes and needs a way to scale the preprocessing data transformations.
Which AWS service or feature will meet these requirements with the LEAST development effort over time?

A. AWS Glue jobs
B. Amazon EMR cluster
C. Amazon Athena
D. AWS Lambda

Question # 32

A company wants to forecast the daily price of newly launched products based on 3 years of data for older product prices, sales, and rebates. The time-series data has irregular timestamps and is missing some values. A data scientist must build a dataset to replace the missing values. The data scientist needs a solution that resamples the data daily and exports the data for further modeling.
Which solution will meet these requirements with the LEAST implementation effort?

A. Use Amazon EMR Serverless with PySpark.
B. Use AWS Glue DataBrew.
C. Use Amazon SageMaker Studio Data Wrangler.
D. Use Amazon SageMaker Studio Notebook with Pandas.

Question # 33

A company operates large cranes at a busy port. The company plans to use machine learning (ML) for predictive maintenance of the cranes to avoid unexpected breakdowns and to improve productivity. The company already uses sensor data from each crane to monitor the health of the cranes in real time. The sensor data includes rotation speed, tension, energy consumption, vibration, pressure, and temperature for each crane. The company contracts AWS ML experts to implement an ML solution.
Which potential findings would indicate that an ML-based solution is suitable for this scenario? (Select TWO.)

A. The historical sensor data does not include a significant number of data points and attributes for certain time periods.
B. The historical sensor data shows that simple rule-based thresholds can predict crane failures.
C. The historical sensor data contains failure data for only one type of crane model that is in operation and lacks failure data of most other types of crane that are in operation.
D. The historical sensor data from the cranes is available with high granularity for the last 3 years.
E. The historical sensor data contains the most common types of crane failures that the company wants to predict.

Question # 34

A company is creating an application to identify, count, and classify animal images that are uploaded to the company's website. The company is using the Amazon SageMaker image classification algorithm with an ImageNetV2 convolutional neural network (CNN). The solution works well for most animal images but does not recognize many animal species that are less common. The company obtains 10,000 labeled images of less common animal species and stores the images in Amazon S3. A machine learning (ML) engineer needs to incorporate the images into the model by using Pipe mode in SageMaker.
Which combination of steps should the ML engineer take to train the model? (Choose two.)

A. Use a ResNet model. Initiate full training mode by initializing the network with random weights.
B. Use an Inception model that is available with the SageMaker image classification algorithm.
C. Create a .lst file that contains a list of image files and corresponding class labels. Upload the .lst file to Amazon S3.
D. Initiate transfer learning. Train the model by using the images of less common species.
E. Use an augmented manifest file in JSON Lines format.

Question # 35

A machine learning (ML) specialist is using the Amazon SageMaker DeepAR forecasting algorithm to train a model on CPU-based Amazon EC2 On-Demand Instances. The model currently takes multiple hours to train. The ML specialist wants to decrease the training time of the model.
Which approaches will meet this requirement? (Select TWO.)

A. Replace On-Demand Instances with Spot Instances
B. Configure model auto scaling dynamically to adjust the number of instances automatically.
C. Replace CPU-based EC2 instances with GPU-based EC2 instances.
D. Use multiple training instances.
E. Use a pre-trained version of the model. Run incremental training.

Question # 36

A manufacturing company has a production line with sensors that collect hundreds of quality metrics. The company has stored sensor data and manual inspection results in a data lake for several months. To automate quality control, the machine learning team must build an automated mechanism that determines whether the produced goods are good quality, replacement market quality, or scrap quality based on the manual inspection results.
Which modeling approach will deliver the MOST accurate prediction of product quality?

A. Amazon SageMaker DeepAR forecasting algorithm
B. Amazon SageMaker XGBoost algorithm
C. Amazon SageMaker Latent Dirichlet Allocation (LDA) algorithm
D. A convolutional neural network (CNN) and ResNet

Question # 37

A data scientist at a financial services company used Amazon SageMaker to train and deploy a model that predicts loan defaults. The model analyzes new loan applications and predicts the risk of loan default. To train the model, the data scientist manually extracted loan data from a database. The data scientist performed the model training and deployment steps in a Jupyter notebook that is hosted on SageMaker Studio notebooks. The model's prediction accuracy is decreasing over time.
Which combination of steps is the MOST operationally efficient way for the data scientist to maintain the model's accuracy? (Select TWO.)

A. Use SageMaker Pipelines to create an automated workflow that extracts fresh data, trains the model, and deploys a new version of the model.
B. Configure SageMaker Model Monitor with an accuracy threshold to check for model drift. Initiate an Amazon CloudWatch alarm when the threshold is exceeded. Connect the workflow in SageMaker Pipelines with the CloudWatch alarm to automatically initiate retraining.
C. Store the model predictions in Amazon S3. Create a daily SageMaker Processing job that reads the predictions from Amazon S3, checks for changes in model prediction accuracy, and sends an email notification if a significant change is detected.
D. Rerun the steps in the Jupyter notebook that is hosted on SageMaker Studio notebooks to retrain the model and redeploy a new version of the model.
E. Export the training and deployment code from the SageMaker Studio notebooks into a Python script. Package the script into an Amazon Elastic Container Service (Amazon ECS) task that an AWS Lambda function can initiate.

Question # 38

A data scientist uses Amazon SageMaker Data Wrangler to define and perform transformations and feature engineering on historical data. The data scientist saves the transformations to SageMaker Feature Store. The historical data is periodically uploaded to an Amazon S3 bucket. The data scientist needs to transform the new historic data and add it to the online feature store. The data scientist needs to prepare the new historic data for training and inference by using native integrations.
Which solution will meet these requirements with the LEAST development effort?

A. Use AWS Lambda to run a predefined SageMaker pipeline to perform the transformations on each new dataset that arrives in the S3 bucket.
B. Run an AWS Step Functions step and a predefined SageMaker pipeline to perform the transformations on each new dataset that arrives in the S3 bucket.
C. Use Apache Airflow to orchestrate a set of predefined transformations on each new dataset that arrives in the S3 bucket.
D. Configure Amazon EventBridge to run a predefined SageMaker pipeline to perform the transformations when new data is detected in the S3 bucket.

Question # 39

A financial services company wants to automate its loan approval process by building a machine learning (ML) model. Each loan data point contains credit history from a third-party data source and demographic information about the customer. Each loan approval prediction must come with a report that contains an explanation for why the customer was approved for a loan or was denied for a loan. The company will use Amazon SageMaker to build the model.
Which solution will meet these requirements with the LEAST development effort?

A. Use SageMaker Model Debugger to automatically debug the predictions, generate the explanation, and attach the explanation report.
B. Use AWS Lambda to provide feature importance and partial dependence plots. Use the plots to generate and attach the explanation report.
C. Use SageMaker Clarify to generate the explanation report. Attach the report to the predicted results.
D. Use custom Amazon CloudWatch metrics to generate the explanation report. Attach the report to the predicted results.

Question # 40

A manufacturing company has structured and unstructured data stored in an Amazon S3 bucket. A Machine Learning Specialist wants to use SQL to run queries on this data.
Which solution requires the LEAST effort to be able to query this data?

A. Use AWS Data Pipeline to transform the data and Amazon RDS to run queries.
B. Use AWS Glue to catalogue the data and Amazon Athena to run queries.
C. Use AWS Batch to run ETL on the data and Amazon Aurora to run the queries.
D. Use AWS Lambda to transform the data and Amazon Kinesis Data Analytics to run queries.

Question # 41

A data scientist has been running an Amazon SageMaker notebook instance for a few weeks. During this time, a new version of Jupyter Notebook was released along with additional software updates. The security team mandates that all running SageMaker notebook instances use the latest security and software updates provided by SageMaker.
How can the data scientist meet these requirements?

A. Call the CreateNotebookInstanceLifecycleConfig API operation
B. Create a new SageMaker notebook instance and mount the Amazon Elastic Block Store(Amazon EBS) volume from the original instance
C. Stop and then restart the SageMaker notebook instance
D. Call the UpdateNotebookInstanceLifecycleConfig API operation

Question # 42

A large company has developed a BI application that generates reports and dashboards using data collected from various operational metrics. The company wants to provide executives with an enhanced experience so they can use natural language to get data from the reports. The company wants the executives to be able to ask questions using written and spoken interfaces.
Which combination of services can be used to build this conversational interface? (Select THREE.)

A. Alexa for Business
B. Amazon Connect
C. Amazon Lex
D. Amazon Polly
E. Amazon Comprehend
F. Amazon Transcribe

Question # 43

A manufacturing company needs to identify returned smartphones that have been damaged by moisture. The company has an automated process that produces 2,000 diagnostic values for each phone. The database contains more than five million phone evaluations. The evaluation process is consistent, and there are no missing values in the data. A machine learning (ML) specialist has trained an Amazon SageMaker linear learner ML model to classify phones as moisture damaged or not moisture damaged by using all available features. The model's F1 score is 0.6.
What changes in model training would MOST likely improve the model's F1 score? (Select TWO.)

A. Continue to use the SageMaker linear learner algorithm. Reduce the number of features with the SageMaker principal component analysis (PCA) algorithm.
B. Continue to use the SageMaker linear learner algorithm. Reduce the number of features with the scikit-learn multi-dimensional scaling (MDS) algorithm.
C. Continue to use the SageMaker linear learner algorithm. Set the predictor type to regressor.
D. Use the SageMaker k-means algorithm with k of less than 1,000 to train the model.
E. Use the SageMaker k-nearest neighbors (k-NN) algorithm. Set a dimension reduction target of less than 1,000 to train the model.

Question # 44

A beauty supply store wants to understand some characteristics of visitors to the store. The store has security video recordings from the past several years. The store wants to generate a report of hourly visitors from the recordings. The report should group visitors by hair style and hair color.
Which solution will meet these requirements with the LEAST amount of effort?

A. Use an object detection algorithm to identify a visitor's hair in video frames. Pass the identified hair to a ResNet-50 algorithm to determine hair style and hair color.
B. Use an object detection algorithm to identify a visitor's hair in video frames. Pass the identified hair to an XGBoost algorithm to determine hair style and hair color.
C. Use a semantic segmentation algorithm to identify a visitor's hair in video frames. Pass the identified hair to a ResNet-50 algorithm to determine hair style and hair color.
D. Use a semantic segmentation algorithm to identify a visitor's hair in video frames. Pass the identified hair to an XGBoost algorithm to determine hair style and hair color.

Question # 45

Each morning, a data scientist at a rental car company creates insights about the previous day's rental car reservation demands. The company needs to automate this process by streaming the data to Amazon S3 in near real time. The solution must detect high-demand rental cars at each of the company's locations. The solution also must create a visualization dashboard that automatically refreshes with the most recent data.
Which solution will meet these requirements with the LEAST development time?

A. Use Amazon Kinesis Data Firehose to stream the reservation data directly to Amazon S3. Detect high-demand outliers by using Amazon QuickSight ML Insights. Visualize the data in QuickSight.
B. Use Amazon Kinesis Data Streams to stream the reservation data directly to Amazon S3. Detect high-demand outliers by using the Random Cut Forest (RCF) trained model in Amazon SageMaker. Visualize the data in Amazon QuickSight.
C. Use Amazon Kinesis Data Firehose to stream the reservation data directly to Amazon S3. Detect high-demand outliers by using the Random Cut Forest (RCF) trained model in Amazon SageMaker. Visualize the data in Amazon QuickSight.
D. Use Amazon Kinesis Data Streams to stream the reservation data directly to Amazon S3. Detect high-demand outliers by using Amazon QuickSight ML Insights. Visualize the data in QuickSight.

Question # 46

A company wants to conduct targeted marketing to sell solar panels to homeowners. The company wants to use machine learning (ML) technologies to identify which houses already have solar panels. The company has collected 8,000 satellite images as training data and will use Amazon SageMaker Ground Truth to label the data. The company has a small internal team that is working on the project. The internal team has no ML expertise and no ML experience.
Which solution will meet these requirements with the LEAST amount of effort from the internal team?

A. Set up a private workforce that consists of the internal team. Use the private workforce and the SageMaker Ground Truth active learning feature to label the data. Use Amazon Rekognition Custom Labels for model training and hosting.
B. Set up a private workforce that consists of the internal team. Use the private workforce to label the data. Use Amazon Rekognition Custom Labels for model training and hosting.
C. Set up a private workforce that consists of the internal team. Use the private workforce and the SageMaker Ground Truth active learning feature to label the data. Use the SageMaker Object Detection algorithm to train a model. Use SageMaker batch transform for inference.
D. Set up a public workforce. Use the public workforce to label the data. Use the SageMaker Object Detection algorithm to train a model. Use SageMaker batch transform for inference.

Question # 47

A finance company needs to forecast the price of a commodity. The company has compiled a dataset of historical daily prices. A data scientist must train various forecasting models on 80% of the dataset and must validate the efficacy of those models on the remaining 20% of the dataset.
How should the data scientist split the dataset into a training dataset and a validation dataset to compare model performance?

A. Pick a date so that 80% of the data points precede the date. Assign that group of data points as the training dataset. Assign all the remaining data points to the validation dataset.
B. Pick a date so that 80% of the data points occur after the date. Assign that group of data points as the training dataset. Assign all the remaining data points to the validation dataset.
C. Starting from the earliest date in the dataset, pick eight data points for the training dataset and two data points for the validation dataset. Repeat this stratified sampling until no data points remain.
D. Sample data points randomly without replacement so that 80% of the data points are in the training dataset. Assign all the remaining data points to the validation dataset.
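
A small sketch of the chronological split from option A. The file and column names are placeholders; the key point is that the validation data is strictly later in time than the training data, so the model is never evaluated on the past.

```python
import pandas as pd

# "prices.csv" is a placeholder with a date column and a price column.
df = pd.read_csv("prices.csv", parse_dates=["date"]).sort_values("date")

# The first 80% of days (by order) train; the most recent 20% validate.
cutoff_idx = int(len(df) * 0.8)
train = df.iloc[:cutoff_idx]
validation = df.iloc[cutoff_idx:]
```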

Question # 48

A chemical company has developed several machine learning (ML) solutions to identify chemical process abnormalities. The time series values of independent variables and the labels are available for the past 2 years and are sufficient to accurately model the problem. The regular operation label is marked as 0. The abnormal operation label is marked as 1. Process abnormalities have a significant negative effect on the company's profits. The company must avoid these abnormalities.
Which metrics will indicate an ML solution that will provide the GREATEST probability of detecting an abnormality?

A. Precision = 0.91, Recall = 0.6
B. Precision = 0.61, Recall = 0.98
C. Precision = 0.7, Recall = 0.9
D. Precision = 0.98, Recall = 0.8

Question # 49

A machine learning (ML) specialist uploads 5 TB of data to an Amazon SageMaker Studio environment. The ML specialist performs initial data cleansing. Before the ML specialist begins to train a model, the ML specialist needs to create and view an analysis report that details potential bias in the uploaded data.
Which combination of actions will meet these requirements with the LEAST operational overhead? (Choose two.)

A. Use SageMaker Clarify to automatically detect data bias.
B. Turn on the bias detection option in SageMaker Ground Truth to automatically analyze data features.
C. Use SageMaker Model Monitor to generate a bias drift report.
D. Configure SageMaker Data Wrangler to generate a bias report.
E. Use SageMaker Experiments to perform a data check.

Question # 50

A company uses sensors on devices such as motor engines and factory machines to measure parameters such as temperature and pressure. The company wants to use the sensor data to predict equipment malfunctions and reduce service outages. A machine learning (ML) specialist needs to gather the sensor data to train a model to predict device malfunctions. The ML specialist must ensure that the data does not contain outliers before training the model.
How can the ML specialist meet these requirements with the LEAST operational overhead?

A. Load the data into an Amazon SageMaker Studio notebook. Calculate the first and third quartile. Use a SageMaker Data Wrangler data flow to remove only values that are outside of those quartiles.
B. Use an Amazon SageMaker Data Wrangler bias report to find outliers in the dataset. Use a Data Wrangler data flow to remove outliers based on the bias report.
C. Use an Amazon SageMaker Data Wrangler anomaly detection visualization to find outliers in the dataset. Add a transformation to a Data Wrangler data flow to remove outliers.
D. Use Amazon Lookout for Equipment to find and remove outliers from the dataset.

Question # 51

A data scientist wants to use Amazon Forecast to build a forecasting model for inventory demand for a retail company. The company has provided a dataset of historic inventory demand for its products as a .csv file stored in an Amazon S3 bucket. The table below shows a sample of the dataset.
How should the data scientist transform the data?

A. Use ETL jobs in AWS Glue to separate the dataset into a target time series dataset and an item metadata dataset. Upload both datasets as .csv files to Amazon S3.
B. Use a Jupyter notebook in Amazon SageMaker to separate the dataset into a related time series dataset and an item metadata dataset. Upload both datasets as tables in Amazon Aurora.
C. Use AWS Batch jobs to separate the dataset into a target time series dataset, a related time series dataset, and an item metadata dataset. Upload them directly to Forecast from a local machine.
D. Use a Jupyter notebook in Amazon SageMaker to transform the data into the optimized protobuf recordIO format. Upload the dataset in this format to Amazon S3.

Question # 52

The chief editor for a product catalog wants the research and development team to build a machine learning system that can be used to detect whether or not individuals in a collection of images are wearing the company's retail brand. The team has a set of training data.
Which machine learning algorithm should the researchers use that BEST meets their requirements?

A. Latent Dirichlet Allocation (LDA)
B. Recurrent neural network (RNN)
C. K-means
D. Convolutional neural network (CNN)

Question # 53

A wildlife research company has a set of images of lions and cheetahs. The company created a dataset of the images. The company labeled each image with a binary label that indicates whether an image contains a lion or cheetah. The company wants to train a model to identify whether new images contain a lion or cheetah.
Which Amazon SageMaker algorithm will meet this requirement?

A. XGBoost
B. Image Classification - TensorFlow
C. Object Detection - TensorFlow
D. Semantic segmentation - MXNet

Question # 54

A company's data scientist has trained a new machine learning model that performs better on test data than the company's existing model performs in the production environment. The data scientist wants to replace the existing model that runs on an Amazon SageMaker endpoint in the production environment. However, the company is concerned that the new model might not work well on the production environment data. The data scientist needs to perform A/B testing in the production environment to evaluate whether the new model performs well on production environment data.
Which combination of steps must the data scientist take to perform the A/B testing? (Choose two.)

A. Create a new endpoint configuration that includes a production variant for each of the two models.
B. Create a new endpoint configuration that includes two target variants that point to different endpoints.
C. Deploy the new model to the existing endpoint.
D. Update the existing endpoint to activate the new model.
E. Update the existing endpoint to use the new endpoint configuration.
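
A sketch combining options A and E: one endpoint configuration with two production variants splitting traffic, then an update of the existing endpoint to use it. All names, ARN-less model identifiers, and the 50/50 weights are hypothetical placeholders.

```python
import boto3

sm = boto3.client("sagemaker")

# One config, two variants: live traffic is weighted between the models.
sm.create_endpoint_config(
    EndpointConfigName="ab-test-config",
    ProductionVariants=[
        {
            "VariantName": "existing-model",
            "ModelName": "model-v1",
            "InstanceType": "ml.m5.xlarge",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 0.5,
        },
        {
            "VariantName": "new-model",
            "ModelName": "model-v2",
            "InstanceType": "ml.m5.xlarge",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 0.5,
        },
    ],
)

# Switch the existing endpoint over to the A/B configuration.
sm.update_endpoint(EndpointName="prod-endpoint", EndpointConfigName="ab-test-config")
```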

Question # 55

A data science team is working with a tabular dataset that the team stores in Amazon S3. The team wants to experiment with different feature transformations such as categorical feature encoding. Then the team wants to visualize the resulting distribution of the dataset. After the team finds an appropriate set of feature transformations, the team wants to automate the workflow for feature transformations.
Which solution will meet these requirements with the MOST operational efficiency?

A. Use Amazon SageMaker Data Wrangler preconfigured transformations to explore feature transformations. Use SageMaker Data Wrangler templates for visualization. Export the feature processing workflow to a SageMaker pipeline for automation.
B. Use an Amazon SageMaker notebook instance to experiment with different feature transformations. Save the transformations to Amazon S3. Use Amazon QuickSight for visualization. Package the feature processing steps into an AWS Lambda function for automation.
C. Use AWS Glue Studio with custom code to experiment with different feature transformations. Save the transformations to Amazon S3. Use Amazon QuickSight for visualization. Package the feature processing steps into an AWS Lambda function for automation.
D. Use Amazon SageMaker Data Wrangler preconfigured transformations to experiment with different feature transformations. Save the transformations to Amazon S3. Use Amazon QuickSight for visualization. Package each feature transformation step into a separate AWS Lambda function. Use AWS Step Functions for workflow automation.

Question # 56

A Machine Learning Specialist is training a model to identify the make and model of vehicles in images. The Specialist wants to use transfer learning and an existing model trained on images of general objects. The Specialist collated a large custom dataset of pictures containing different vehicle makes and models.
What should the Specialist do to initialize the model to re-train it with the custom data?

A. Initialize the model with random weights in all layers including the last fully connected layer.
B. Initialize the model with pre-trained weights in all layers and replace the last fully connected layer.
C. Initialize the model with random weights in all layers and replace the last fully connected layer.
D. Initialize the model with pre-trained weights in all layers including the last fully connected layer.