ACA-BigData1 Free Practice Questions: Alibaba Cloud ACA Big Data Certification

A resource is a concept specific to MaxCompute. If you want to use a user-defined function (UDF) or MapReduce, resources are needed. For example, after you have developed a UDF, you must upload the compiled JAR package to MaxCompute as a resource. Which of the following objects are MaxCompute resources? (Number of correct answers: 4) Score 2

Correct answer: A, C, D, E
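The question above notes that a compiled JAR package must be uploaded to MaxCompute as a resource before a UDF can be used. The following is a minimal sketch of that flow using the PyODPS SDK; the JAR name my_lower.jar, the UDF class com.example.MyLower, the project name, and the credential placeholders are all hypothetical.

# Sketch (assumptions noted above): upload a compiled UDF JAR as a MaxCompute
# resource, then register a function that references it.
from odps import ODPS

o = ODPS(
    '<access-key-id>',          # placeholder credentials
    '<access-key-secret>',
    project='my_project',       # hypothetical project
    endpoint='https://service.odps.aliyun.com/api',
)

# Upload the JAR package to the project as a 'jar' resource.
with open('my_lower.jar', 'rb') as f:
    res = o.create_resource('my_lower.jar', 'jar', file_obj=f)

# Register a UDF backed by the uploaded resource.
o.create_function(
    'my_lower',
    class_type='com.example.MyLower',   # hypothetical UDF class
    resources=[res],
)

Once registered, the function can be called from MaxCompute SQL like a built-in function.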
Tom is the administrator of project prj1 in MaxCompute. The project involves a large volume of sensitive data, such as user IDs and shopping records, as well as many data mining algorithms with proprietary intellectual property rights. Tom wants to protect this sensitive data and these algorithms properly; specifically, project users should only be able to access data within the project, and all data should flow only within the project.
What operation should he perform?
Score 2
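The scenario above maps to MaxCompute's project data protection feature, which is enabled with the security statement set ProjectProtection=true. Below is a minimal sketch that issues this statement through PyODPS, assuming its run_security_query helper is available; the credential placeholders are illustrative, and prj1 is taken from the scenario.

from odps import ODPS

o = ODPS(
    '<access-key-id>',          # placeholder credentials
    '<access-key-secret>',
    project='prj1',
    endpoint='https://service.odps.aliyun.com/api',
)

# Enable project data protection so data can only flow within prj1.
o.run_security_query('set ProjectProtection=true')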

You are working on a project where you need to chain together MapReduce and Hive jobs. You also need the ability to use forks, decision points, and path joins. Which ecosystem project should you use to perform these actions?
Score 2

To ensure smooth processing of tasks in the DataWorks data development kit, you must create an AccessKey. An AccessKey is primarily used for access permission verification between various Alibaba Cloud products. An AccessKey has two parts; they are ____. (Number of correct answers: 2) Score 2
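An AccessKey pair consists of an AccessKey ID and an AccessKey Secret. The sketch below shows where the two parts go when a client such as PyODPS authenticates against MaxCompute; the project name and endpoint are placeholders.

from odps import ODPS

o = ODPS(
    '<access-key-id>',        # part 1: AccessKey ID, identifies the caller
    '<access-key-secret>',    # part 2: AccessKey Secret, used to sign requests
    project='my_project',     # hypothetical project
    endpoint='https://service.odps.aliyun.com/api',
)

# Every subsequent request is signed with the secret and verified by the service.
for table in o.list_tables():
    print(table.name)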

MaxCompute is a fast, fully managed TB/PB-level data warehousing solution provided by Alibaba Cloud. Which of the following descriptions of its product features are correct? ______ (Number of correct answers: 3) Score 2

Correct answer: A, B, E
Which of the following task types does DataWorks support?
(Number of correct answers: 4)

Correct answer: A, B, C, D
Scenario: Jack is the administrator of project prj1. The project involves a large volume of sensitive data, such as bank accounts and medical records. Jack wants to protect the data properly. Which of the following statements is necessary?
