DP-200 Exam Free Practice Questions (Microsoft Implementing an Azure Data Solution certification)
You are building an Azure Stream Analytics query that will receive input data from Azure IoT Hub and write the results to Azure Blob storage.
You need to calculate the difference in readings per sensor per hour.
How should you complete the query? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Correct answer:

Reference:
https://docs.microsoft.com/en-us/stream-analytics-query/lag-azure-stream-analytics
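For context, the LAG function cited above returns a value from the previous event in the stream, and PARTITION BY ... LIMIT DURATION restricts the lookback to a window per key. A minimal sketch of such a query, assuming hypothetical names for the input (IoTHubInput), the output (BlobOutput), and the event fields (sensorId, reading):

SELECT
    sensorId,
    -- Difference between the current reading and the previous reading
    -- from the same sensor, looking back at most one hour.
    reading - LAG(reading) OVER (PARTITION BY sensorId LIMIT DURATION(hour, 1)) AS readingDiff
INTO BlobOutput
FROM IoTHubInput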
Use the following login credentials as needed:
Azure Username: xxxxx
Azure Password: xxxxx
The following information is for technical support purposes only:
Lab Instance: 10277521
You plan to create large data sets on db2.
You need to ensure that missing indexes are created automatically by Azure in db2. The solution must apply ONLY to db2.
To complete this task, sign in to the Azure portal.
Correct answer:
1. To enable automatic tuning on the Azure SQL Database logical server, navigate to the server in the Azure portal and then select Automatic tuning in the menu.
2. Select database db2.
3. Click the Apply button.
Reference:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-automatic-tuning-enable
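The portal steps above can also be expressed in T-SQL at the database level, which keeps the setting scoped to db2 alone. A minimal sketch, to be run while connected to db2:

-- "current" applies the setting only to the database the session is connected to (db2).
ALTER DATABASE current SET AUTOMATIC_TUNING (CREATE_INDEX = ON);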

You need to implement event processing by using Stream Analytics to produce consistent JSON documents.
Which three actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
You need to ensure that the missing indexes for REPORTINGDB are added.
What should you use?
Correct answer: B
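The answer choices are not reproduced in this dump, but whichever tool is chosen, the underlying recommendations come from the missing-index DMVs that Azure SQL Database maintains. A minimal sketch for inspecting them, using only documented views:

-- List missing-index suggestions recorded by the optimizer for the current database,
-- highest estimated impact first.
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
     ON d.index_handle = g.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
     ON g.index_group_handle = s.group_handle
ORDER BY s.avg_user_impact DESC;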
You need to ensure polling data security requirements are met.
Which security technologies should you use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Correct answer:

References:
https://docs.microsoft.com/en-us/sql/t-sql/statements/create-database-scoped-credential-transact-sql
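The cited reference describes CREATE DATABASE SCOPED CREDENTIAL, which is commonly paired with external data sources (for example, PolyBase loads) so the database can authenticate to storage. A minimal sketch with placeholder values; the credential name is hypothetical:

-- A database master key must exist before a scoped credential can be created.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

-- Hypothetical credential authenticating to storage with a shared access signature.
CREATE DATABASE SCOPED CREDENTIAL PollingDataCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token>';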
You have an alert on a SQL pool in Azure Synapse that uses the signal logic shown in the exhibit.

On the same day, failures occur at the following times:
08:01
08:03
08:04
08:06
08:11
08:16
08:19
The evaluation period starts on the hour.
At which times will alert notifications be sent?

Correct answer: D
You have a SQL pool in Azure Synapse that contains a table named dbo.Customers. The table contains a column named Email.
You need to prevent nonadministrative users from seeing the full email addresses in the Email column. The users must see values in a format of aXXX@XXXX.com instead.
What should you do?
Correct answer: C
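Assuming the correct choice is dynamic data masking (the aXXX@XXXX.com format is exactly what the built-in email() masking function produces), a minimal T-SQL sketch:

-- Nonadministrative users without the UNMASK permission see aXXX@XXXX.com.
ALTER TABLE dbo.Customers
ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');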
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this scenario, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are developing a solution that will use Azure Stream Analytics. The solution will accept an Azure Blob storage file named Customers. The file will contain both in-store and online customer details. The online customers will provide a mailing address.
You have a file in Blob storage named LocationIncomes that contains median incomes based on location. The file rarely changes.
You need to use an address to look up a median income based on location. You must output the data to Azure SQL Database for immediate use and to Azure Data Lake Storage Gen2 for long-term retention.
Solution: You implement a Stream Analytics job that has one streaming input, one query, and two outputs.
Does this meet the goal?
Correct answer: A
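For illustration, a Stream Analytics query can write one result set to several outputs by repeating SELECT ... INTO over a shared query step. A minimal sketch, with hypothetical aliases for the input (Customers) and the two outputs (SqlDbOutput, DataLakeOutput):

-- A common step computed once and fanned out to both sinks.
WITH EnrichedCustomers AS (
    SELECT *
    FROM Customers
)
SELECT * INTO SqlDbOutput FROM EnrichedCustomers    -- Azure SQL Database, immediate use
SELECT * INTO DataLakeOutput FROM EnrichedCustomers -- Data Lake Storage Gen2, long-term retention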
You are a data engineer. You are designing a Hadoop Distributed File System (HDFS) architecture. You plan to use Microsoft Azure Data Lake as a data storage repository.
You must provision the repository with a resilient data schema. You need to ensure the resiliency of the Azure Data Lake Storage. What should you use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Correct answer:

References:
https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html#NameNode+and+DataNodes
Your company manages a payroll application for its customers worldwide. The application uses an Azure SQL database named DB1. The database contains a table named Employee and an identity column named EmployeeId.
A customer requests the EmployeeId be treated as sensitive data.
Whenever a user queries EmployeeId, you need to return a random value between 1 and 10 instead of the EmployeeId value.
Which masking format should you use?
Correct answer: C
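Assuming the correct choice is the random number masking function of dynamic data masking, a minimal T-SQL sketch:

-- Users without the UNMASK permission see a random value from 1 to 10
-- instead of the real EmployeeId.
ALTER TABLE dbo.Employee
ALTER COLUMN EmployeeId ADD MASKED WITH (FUNCTION = 'random(1, 10)');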
A company plans to use Platform-as-a-Service (PaaS) to create a new data pipeline process. The process must meet the following requirements:
Ingest:
* Access multiple data sources.
* Provide the ability to orchestrate workflow.
* Provide the capability to run SQL Server Integration Services packages.
Store:
* Optimize storage for big data workloads.
* Provide encryption of data at rest.
* Operate with no size limits.
Prepare and Train:
* Provide a fully managed and interactive workspace for exploration and visualization.
* Provide the ability to program in R, SQL, Python, Scala, and Java.
* Provide seamless user authentication with Azure Active Directory.
Model & Serve:
* Implement native columnar storage.
* Provide support for the SQL language.
* Provide support for structured streaming.
You need to build the data integration pipeline.
Which technologies should you use? To answer, select the appropriate options in the answer area.

Correct answer:

References:
https://docs.microsoft.com/bs-latn-ba/azure/architecture/data-guide/technology-choices/pipeline-orchestration-data-movement
https://docs.microsoft.com/en-us/azure/azure-databricks/what-is-azure-databricks
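On the Model & Serve side, "native columnar storage" with SQL support corresponds to clustered columnstore tables in Azure SQL Data Warehouse (now a dedicated SQL pool in Azure Synapse). A minimal sketch with a hypothetical fact table:

-- Hypothetical table; CLUSTERED COLUMNSTORE INDEX is the columnar storage format,
-- and HASH distribution spreads rows across the pool's distributions.
CREATE TABLE dbo.FactSales
(
    SaleId     INT NOT NULL,
    CustomerId INT NOT NULL,
    Amount     DECIMAL(18, 2) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(CustomerId),
    CLUSTERED COLUMNSTORE INDEX
);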
You are implementing an Azure Blob storage account for an application that has the following requirements:
Data created during the last 12 months must be readily accessible.
Blobs older than 24 months must use the lowest storage costs. This data will be accessed infrequently.
Data created 12 to 24 months ago will be accessed infrequently but must be readily accessible at the lowest storage costs.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Correct answer:

1 - Create a block blob in a Blob storage account.
2 - Use an Azure Resource Manager template that has a lifecycle management policy.
3 - Create a rule that has the rule actions of TierToCool, TierToArchive, and Delete.
4 - Schedule the lifecycle management policy to run.
References:
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-lifecycle-management-concepts
A company uses Azure SQL Database to store sales transaction data. Field sales employees need an offline copy of the database that includes last year's sales on their laptops when there is no internet connection available.
You need to create the offline export copy.
Which three options can you use? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
Correct answer: A, B, E