1z0-1110-22 Practice Test: 58 Updated Questions Available [Q31-Q51]



Try questions from the Oracle 1z0-1110-22 question set that can help you pass on your first attempt!


The Oracle 1z0-1110-22 exam consists of 60 multiple-choice questions and must be completed within 105 minutes. The exam can be taken online or at an Oracle testing center. Passing it earns you the globally recognized Oracle Cloud Infrastructure Data Science 2022 Professional certification, which is highly regarded by employers and industry professionals.

Question # 31
You have a complex Python code project that could benefit from using Data Science Jobs, as it is a repeatable machine learning model training task. The project contains many subfolders and classes. What is the best way to run this project as a job?

  • A. Rewrite your code into a single executable Python or Bash/Shell script file.
  • B. ZIP the entire code project folder and upload it as a Job artifact on job creation; Jobs identifies the main executable file automatically.
  • C. ZIP the entire code project folder, upload it as a Job artifact on job creation, and set JOB_RUN_ENTRYPOINT to point to the main executable file.
  • D. ZIP the entire code project folder and upload it as a Job artifact; Jobs automatically identifies the main top-level file where the code is run.

Correct Answer: C

Explanation:
When the artifact is a ZIP of a whole project, Jobs cannot guess the entrypoint; the JOB_RUN_ENTRYPOINT environment variable tells it which file inside the archive to execute.
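For reference, a minimal sketch of this setup with the ADS jobs API; the project folder, entrypoint path, and compute shape are placeholder assumptions, not values from the exam.

```python
# Minimal sketch (ADS jobs API); folder, entrypoint, and shape are placeholders.
from ads.jobs import DataScienceJob, Job, PythonRuntime

job = (
    Job(name="train-model")
    .with_infrastructure(DataScienceJob().with_shape_name("VM.Standard2.1"))
    .with_runtime(
        PythonRuntime()
        .with_source("my_project/")             # whole project folder, zipped on upload
        .with_entrypoint("my_project/main.py")  # sets JOB_RUN_ENTRYPOINT
    )
)
job.create()     # uploads the artifact and creates the job
run = job.run()  # starts a job run
```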


Question # 32
You have an embarrassingly parallel or distributed batch job on a large amount of data running using Data Science Jobs. What would be the best approach to run the workload?

  • A. Create the job in Data Science Jobs and start a job run. When it is done, start a new job run until you achieve the number of runs required.
  • B. Create a new job for every job run that you have to run in parallel, because the Data Science Jobs service can have only one job run per job.
  • C. Create the job in Data Science Jobs and then start the number of simultaneous job runs required for your workload.
  • D. Reconfigure the job run, because Data Science Jobs does not support embarrassingly parallel workloads.

Correct Answer: C
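As an illustration of that fan-out pattern, a hedged sketch using the ADS jobs API; the job OCID and the SHARD_* environment variables are hypothetical.

```python
# Sketch: one job, many simultaneous job runs (ADS jobs API).
# The OCID and SHARD_* environment variables are hypothetical placeholders.
from ads.jobs import Job

job = Job.from_datascience_job("ocid1.datasciencejob.oc1..example")
runs = [
    job.run(env_var={"SHARD_INDEX": str(i), "SHARD_COUNT": "10"})
    for i in range(10)  # each run processes its own shard in parallel
]
```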


Question # 33
When preparing your model artifact to save it to the Oracle Cloud Infrastructure (OCI) Data Science model catalog, you create a score.py file. What is the purpose of the score.py file?

  • A. Configure the deployment infrastructure.
  • B. Define the compute scaling strategy.
  • C. Define the inference server dependencies.
  • D. Execute the inference logic code.

Correct Answer: D
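For context, a minimal score.py sketch built around the two documented hooks, load_model() and predict(); the artifact file name model.joblib is an assumption for illustration.

```python
# score.py sketch: the inference logic executed at prediction time.
import json
import os

import joblib
import pandas as pd


def load_model():
    """Deserialize the model shipped inside the model artifact."""
    artifact_dir = os.path.dirname(os.path.realpath(__file__))
    return joblib.load(os.path.join(artifact_dir, "model.joblib"))


def predict(data, model=load_model()):
    """Run inference and return a JSON-serializable payload."""
    frame = pd.DataFrame(json.loads(data) if isinstance(data, str) else data)
    return {"prediction": model.predict(frame).tolist()}
```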


Question # 34
You have a data set with fewer than 1,000 observations, and you are using Oracle AutoML to build a classifier. While visualizing the results of each stage of the Oracle AutoML pipeline, you notice that no visualization has been generated for one of the stages. Which stage is not visualized?

  • A. Feature selection
  • B. Algorithm selection
  • C. Hyperparameter tuning
  • D. Adaptive sampling

Correct Answer: D


Question # 35
You are a data scientist leveraging Oracle Cloud Infrastructure (OCI) Data Science to create a model, and you need some additional Python libraries for processing genome sequencing data. Which of the following THREE statements are correct with respect to installing additional Python libraries to process the data?

  • A. You can only install libraries using yum and pip as a normal user.
  • B. OCI Data Science allows root privileges in notebook sessions.
  • C. You can install private or custom libraries from your own internal repositories.
  • D. You can install any open source package available in a publicly accessible Python Package Index (PyPI) repository.
  • E. You cannot install a library that is not preinstalled in the provided image.

Correct Answer: A, C, D
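In a notebook session those installs look like the following cell; the package name and the internal index URL are hypothetical.

```python
# Notebook cell, run as the normal (non-root) user; names are placeholders.
!pip install biopython                                           # public PyPI
!pip install --index-url https://repo.example.com/simple my-lib  # internal repo
```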


Question # 36
You are using a third-party Continuous Integration/Continuous Delivery (CI/CD) tool to create a pipeline for preparing and training models. How would you integrate a third-party tool outside Oracle Cloud Infrastructure (OCI) to access Data Science Jobs?

  • A. Third-party tools use authentication keys to create and run Data Science Jobs.
  • B. Data Science Jobs is not accessible from outside OCI.
  • C. Data Science Jobs does not accept code from third-party tools, therefore you need to run the pipeline externally.
  • D. Third-party software can access Data Science Jobs by using any of the OCI Software Development Kits (SDKs).

Correct Answer: D
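A hedged sketch of that integration with the OCI Python SDK, authenticating from outside OCI with an API signing key; all OCIDs are placeholders.

```python
# Sketch: a CI/CD runner outside OCI triggers an existing Data Science job.
import oci

config = oci.config.from_file("~/.oci/config", "DEFAULT")  # API key auth
ds = oci.data_science.DataScienceClient(config)

run = ds.create_job_run(
    oci.data_science.models.CreateJobRunDetails(
        project_id="ocid1.datascienceproject.oc1..example",  # placeholder
        compartment_id="ocid1.compartment.oc1..example",     # placeholder
        job_id="ocid1.datasciencejob.oc1..example",          # placeholder
        display_name="ci-triggered-run",
    )
).data
print(run.lifecycle_state)
```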


Question # 37
As you are working in your notebook session, you find that your notebook session does not have enough compute CPU and memory for your workload. How would you scale up your notebook session without losing your work?

  • A. Deactivate your notebook session, provision a new notebook session on a larger compute shape, and re-create all your file changes.
  • B. Download your files and data to your local machine, delete your notebook session, provision a new notebook session on a larger compute shape, and upload your files from your local machine to the new notebook session.
  • C. Ensure your files and environments are written to the block volume storage under the /home/datascience directory, deactivate the notebook session, and activate the notebook session with a larger compute shape selected.
  • D. Create a temporary bucket in Object Storage, write all your files and data to Object Storage, delete your notebook session, provision a new notebook session on a larger compute shape, and copy your files and data from your temporary bucket onto your new notebook session.

Correct Answer: C


Question # 38
As a data scientist, you are tasked with creating a model training job that is expected to take different hyperparameter values on every run. What is the most efficient way to set those parameters with Oracle Data Science Jobs?

  • A. Create a new job every time you need to run your code and pass the parameters as environment variables.
  • B. Create a new job by setting the required parameters in your code, and create a new job for every code change.
  • C. Create your code to expect different parameters either as environment variables or as command line arguments, which are set on every job run with different values.
  • D. Create your code to expect different parameters as command line arguments, and create a new job every time you run the code.

Correct Answer: C
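A minimal sketch of such a training script; the parameter names LEARNING_RATE and EPOCHS are hypothetical.

```python
# train.py sketch: hyperparameters come from environment variables or CLI
# arguments, so each job run can supply different values without code changes.
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--learning-rate", type=float,
                    default=float(os.environ.get("LEARNING_RATE", "0.01")))
parser.add_argument("--epochs", type=int,
                    default=int(os.environ.get("EPOCHS", "10")))
args = parser.parse_args()

print(f"training with learning_rate={args.learning_rate}, epochs={args.epochs}")
```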


Question # 39
You have developed model training code that regularly checks for new data in Object Storage and retrains the model. Which statement best describes the Oracle Cloud Infrastructure (OCI) services that can be accessed from Data Science Jobs?

  • A. Some OCI services require authorizations not supported by Data Science Jobs.
  • B. Data Science Jobs cannot access all OCI services.
  • C. Data Science Jobs can access OCI resources only via the resource principal.
  • D. Data Science Jobs can access all OCI services.

Correct Answer: D
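For example, a job run can reach Object Storage through its resource principal, as in this sketch; the bucket name is a placeholder.

```python
# Sketch: inside a job run, poll Object Storage for new training data using
# the resource principal (no API keys needed in the job environment).
import oci

signer = oci.auth.signers.get_resource_principals_signer()
os_client = oci.object_storage.ObjectStorageClient(config={}, signer=signer)

namespace = os_client.get_namespace().data
listing = os_client.list_objects(namespace, "training-data").data  # placeholder bucket
for obj in listing.objects:
    print(obj.name)
```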


Question # 40
You are a data scientist using Oracle AutoML to produce a model, and you are evaluating the score metric for the model. Which of the following TWO prevailing metrics would you use for evaluating a multiclass classification model?

  • A. F1 Score
  • B. Recall
  • C. Explained variance score
  • D. Mean squared error
  • E. R-Squared

Correct Answer: A, B
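For reference, both metrics as computed with scikit-learn on a toy multiclass example:

```python
# Sketch: weighted F1 and recall for a multiclass classifier (scikit-learn).
from sklearn.metrics import f1_score, recall_score

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 0]

print("F1 (weighted):    ", f1_score(y_true, y_pred, average="weighted"))
print("recall (weighted):", recall_score(y_true, y_pred, average="weighted"))
```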


Question # 41
You want to evaluate the relationship between feature values and model predictions. You suspect that some of the features are correlated. Which model explanation technique would you recommend?

  • A. Feature Dependence Explanations.
  • B. Local Interpretable Model-Agnostic Explanations.
  • C. Accumulated Local Effects.
  • D. Feature Permutation Importance Explanations.

Correct Answer: C


Question # 42
You are given the task of writing a program that sorts document images by language. Which Oracle AI service would you use?

  • A. OCI Language
  • B. OCI Vision
  • C. Oracle Digital Assistant
  • D. OCI Speech

Correct Answer: B


Question # 43
Six months ago, you created and deployed a model that predicts customer churn for a call center. Initially, it was yielding quality predictions. However, over the last two months, users have been questioning the credibility of the predictions. Which TWO methods would you employ to verify the accuracy of the model?

  • A. Validate the model using recent data
  • B. Redeploy the model
  • C. Operational monitoring
  • D. Retrain the model
  • E. Drift monitoring

Correct Answer: A, D
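A minimal sketch of the validation step; the artifact path, CSV file, and target column are placeholder assumptions.

```python
# Sketch: score the deployed churn model on labeled data from recent months.
import joblib
import pandas as pd
from sklearn.metrics import accuracy_score

model = joblib.load("churn_model.joblib")         # placeholder artifact
recent = pd.read_csv("recent_labeled_calls.csv")  # placeholder data
X_recent = recent.drop(columns=["churned"])
y_recent = recent["churned"]

print("accuracy on recent data:", accuracy_score(y_recent, model.predict(X_recent)))
```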


Question # 44
You are a data scientist with a set of text and image files that need annotation, and you want to use Oracle Cloud Infrastructure (OCI) Data Labeling. Which of the following THREE annotation classes are supported by the tool?

  • A. Polygonal Segmentation
  • B. Key-Point and Landmark
  • C. Named Entity Extraction
  • D. Classification (single/multi label)
  • E. Object Detection
  • F. Semantic Segmentation

Correct Answer: C, D, E


Question # 45
You have just received a new data set from a colleague. You want to quickly find out summary information about the data set, such as the types of features, the total number of observations, and the data distributions. Which Accelerated Data Science (ADS) SDK method from the ADSDataset class would you use?

  • A. compute()
  • B. show_in_notebook()
  • C. to_xgb()
  • D. show_corr()

Correct Answer: B
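Usage looks like this sketch; the file path is a placeholder.

```python
# Sketch: open a data set with ADS and render the summary widget.
from ads.dataset.factory import DatasetFactory

ds = DatasetFactory.open("data.csv")  # placeholder path
ds.show_in_notebook()  # feature types, observation counts, distributions
```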


Question # 46
You have created a model, and you want to use the Accelerated Data Science (ADS) SDK to deploy this model. Where can you save the artifacts to deploy this model with ADS?

  • A. Model Catalog
  • B. OCI Vault
  • C. Model Depository
  • D. Data Science Artifactory

Correct Answer: A
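A hedged sketch of saving artifacts to the Model Catalog with the ADS GenericModel class; the conda environment slug, file names, and the toy estimator are assumptions for illustration.

```python
# Sketch: prepare artifacts (including score.py) and save to the Model Catalog.
from ads.model.generic_model import GenericModel
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=200).fit(X, y)

model = GenericModel(estimator=clf, artifact_dir="./artifact")
model.prepare(inference_conda_env="generalml_p38_cpu_v1",  # placeholder slug
              model_file_name="model.pkl",
              force_overwrite=True)
model_id = model.save(display_name="demo-model")  # stored in the Model Catalog
```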


Question # 47
While reviewing your data, you discover that your data set has a class imbalance. You are aware that the Accelerated Data Science (ADS) SDK provides multiple built-in automatic transformation tools for data set transformation. Which would be the right tool to correct any imbalance between the classes?

  • A. sample()
  • B. visualize_transforms()
  • C. suggest_recommendations()
  • D. auto_transform()

Correct Answer: D
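In practice that looks like this sketch; the file path and target column are placeholders.

```python
# Sketch: let ADS apply its recommended transformations, including
# up/down-sampling to correct the class imbalance.
from ads.dataset.factory import DatasetFactory

ds = DatasetFactory.open("train.csv", target="label")  # placeholders
balanced = ds.auto_transform()  # applies suggested fixes such as rebalancing
```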


Question # 48
You have built a machine learning model to predict whether a bank customer will default on a loan. You want to use Local Interpretable Model-Agnostic Explanations (LIME) to understand a specific prediction. What is the key idea behind LIME?

  • A. Model-agnostic techniques are more interpretable than techniques that are dependent on the types of models.
  • B. Global and local behaviors of machine learning models are similar.
  • C. Global behavior of a machine learning model may be complex, while the local behavior may be approximated with a simpler surrogate model.
  • D. Local explanation techniques are model agnostic, while global explanation techniques are not.

Correct Answer: C
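To make the idea concrete, a sketch with the open-source lime package (one possible implementation, not necessarily the exam's own tooling): a simple linear surrogate is fitted around a single prediction of a complex global model.

```python
# Sketch: local surrogate around one prediction (open-source `lime` package).
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)  # complex global model

explainer = LimeTabularExplainer(X, mode="classification")
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # weights of the simple local surrogate
```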


Question # 49
You have trained three different models on your data set using Oracle AutoML. You want to visualize the behavior of each of the models, including the baseline model, on the test set. Which class should be used from the Accelerated Data Science (ADS) SDK to visually compare the models?

  • A. ADSTuner
  • B. ADSEvaluator
  • C. EvaluationMetrics
  • D. ADSExplainer

Correct Answer: B
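A hedged sketch of the comparison; the toy models and data stand in for the AutoML outputs.

```python
# Sketch: compare models side by side on a test set with ADSEvaluator.
from ads.common.data import ADSData
from ads.common.model import ADSModel
from ads.evaluations.evaluator import ADSEvaluator
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [
    ADSModel.from_estimator(LogisticRegression(max_iter=500).fit(X_tr, y_tr)),
    ADSModel.from_estimator(DecisionTreeClassifier().fit(X_tr, y_tr)),
]
evaluator = ADSEvaluator(ADSData(X_te, y_te), models=models)
evaluator.show_in_notebook()  # visual comparison of the models on the test set
```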


Question # 50
You are working as a data scientist for a healthcare company. The company decides to analyze the data to find patterns in a large volume of electronic medical records. You are asked to build a PySpark solution to analyze these records in a JupyterLab notebook. What is the order of recommended steps to develop a PySpark application in Oracle Cloud Infrastructure (OCI) Data Science?

  • A. Install a PySpark conda environment. Configure core-site.xml. Launch a notebook session. Create a Data Flow application with the Accelerated Data Science (ADS) SDK. Develop your PySpark application.
  • B. Launch a notebook session. Install a PySpark conda environment. Configure core-site.xml. Develop your PySpark application. Create a Data Flow application with the Accelerated Data Science (ADS) SDK.
  • C. Launch a notebook session. Configure core-site.xml. Install a PySpark conda environment. Develop your PySpark application. Create a Data Flow application with the Accelerated Data Science (ADS) SDK.
  • D. Configure core-site.xml. Install a PySpark conda environment. Create a Data Flow application with the Accelerated Data Science (ADS) SDK. Develop your PySpark application. Launch a notebook session.

Correct Answer: B
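Once the environment and core-site.xml are in place, the "develop your PySpark application" step might look like this sketch; the Object Storage URI and column name are placeholders.

```python
# Sketch: analyze medical records with PySpark inside the notebook session.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("emr-analysis").getOrCreate()
records = spark.read.json("oci://bucket@namespace/emr/*.json")  # placeholder URI
records.groupBy("diagnosis_code").count().show()  # placeholder column
```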


Question # 51
......

Oracle 1z0-1110-22 exam question set [Latest 2024]: practice with valid exam questions and answers: https://www.goshiken.com/Oracle/1z0-1110-22-mondaishu.html