
Azure AI Fundamentals Certification Question & Answer 13 - AI 900

Education


Introduction

Returning a bounding box that indicates the location of a vehicle in an image is a clear example of object detection. This capability identifies objects within an image and indicates where each one is, typically by drawing a bounding box around it.
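As a minimal sketch, this is roughly what object detection looks like with the Azure Computer Vision SDK for Python; the endpoint, key, and image URL are placeholders you would replace with your own:

```python
# Sketch: detect objects and print their bounding boxes with Azure Computer Vision.
# Endpoint, key, and image URL are placeholders, not real values.
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credentials=CognitiveServicesCredentials("<your-key>"),
)

result = client.detect_objects("https://example.com/street-scene.jpg")

for obj in result.objects:
    box = obj.rectangle  # x, y, width, height in pixels
    print(f"{obj.object_property} at ({box.x}, {box.y}, {box.w}, {box.h}) "
          f"confidence={obj.confidence:.2f}")
```

Each detected object comes back with a label (such as "car") and the rectangle that answers the exam scenario: where in the image the vehicle is.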

To interpret user input such as "call me back later," the Language Understanding (LUIS) service can be used. This service interprets the meaning and intent behind natural language input, determining what action the user wants the application to take.
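A hedged sketch of calling a LUIS prediction endpoint over REST is shown below; the app ID, key, and region are placeholders, and the intent name is only an example of what a trained app might return:

```python
# Sketch: query a LUIS prediction endpoint for the intent behind an utterance.
# Region, app ID, and key are placeholders; the intent "Callback" is hypothetical.
import requests

endpoint = "https://<region>.api.cognitive.microsoft.com"
app_id = "<your-luis-app-id>"

resp = requests.get(
    f"{endpoint}/luis/prediction/v3.0/apps/{app_id}/slots/production/predict",
    params={"query": "call me back later", "subscription-key": "<your-key>"},
)
prediction = resp.json()["prediction"]
print("Top intent:", prediction["topIntent"])  # e.g. "Callback"
print("Entities:", prediction["entities"])     # e.g. the datetime phrase "later"
```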

An AI solution that enables users to control smart devices via verbal commands relies on two types of Natural Language Processing (NLP) workloads: Speech to Text and Text to Speech. These components work together, converting spoken commands into text for interpretation and then back into speech for user feedback.
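Using the Azure Speech SDK, the two workloads look roughly like this; the key, region, and spoken phrases are placeholders:

```python
# Sketch: the two NLP workloads behind a voice-controlled device, using the
# Azure Speech SDK. Key, region, and the example command are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")

# Speech to Text: capture the spoken command from the default microphone.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once()
print("Heard:", result.text)  # e.g. "Turn on the living room lights."

# Text to Speech: speak the confirmation back to the user.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
synthesizer.speak_text_async("Okay, turning on the living room lights.").get()
```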

If an NLP model is trained on data obtained without permission, it violates the Privacy and Security principle of Microsoft's responsible AI framework. This principle underscores the need for ethical data handling and respect for individuals' privacy.

In the realm of computer vision, a traffic monitoring system that collects vehicle registration numbers from CCTV footage exemplifies text detection, that is, optical character recognition (OCR) applied to identify and extract text from images or video frames.
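A hedged sketch of reading text from a frame with the Computer Vision Read API follows; the endpoint, key, and frame URL are placeholders, and real plate extraction would add filtering on top of this:

```python
# Sketch: extract text (e.g. registration plates) from an image frame with the
# asynchronous Read API. Endpoint, key, and image URL are placeholders.
import time
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    CognitiveServicesCredentials("<your-key>"),
)

# Submit the frame; the Read operation runs asynchronously.
read_op = client.read("https://example.com/cctv-frame.jpg", raw=True)
operation_id = read_op.headers["Operation-Location"].split("/")[-1]

# Poll until text extraction finishes.
while True:
    result = client.get_read_result(operation_id)
    if result.status not in (OperationStatusCodes.running, OperationStatusCodes.not_started):
        break
    time.sleep(1)

if result.status == OperationStatusCodes.succeeded:
    for page in result.analyze_result.read_results:
        for line in page.lines:
            print(line.text)  # e.g. "AB12 CDE"
```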

When planning a responsible generative AI solution, the correct sequence of stages to follow is: Identify, Measure, Mitigate, and Operate. You first identify potential harms, then measure how often they appear in the solution's outputs, then mitigate them at the appropriate layers, and finally operate the deployed solution with ongoing monitoring.

For training a model, the appropriate module in Azure Machine Learning designer to create a training dataset and validation dataset from an existing dataset is the Split Data module. It divides the data into two subsets according to a proportion you specify, so the model can be trained on one subset and validated on the other.
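The Split Data module is a drag-and-drop designer component rather than code, but as an illustrative analogy only, the same idea expressed with scikit-learn looks like this (the file name, columns, and 70/30 split are example values):

```python
# Analogy only: what the designer's Split Data module does, expressed in code.
# "vehicles.csv" and the 70/30 split fraction are hypothetical examples.
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_csv("vehicles.csv")  # an existing dataset

# "Fraction of rows in the first output dataset" = 0.7 in the designer module.
train_data, validation_data = train_test_split(
    data, train_size=0.7, random_state=42, shuffle=True
)

print(len(train_data), "training rows;", len(validation_data), "validation rows")
```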

Contrary to a common misconception, Azure OpenAI image generation models are capable of generating images not only from prompt text but also from other inputs, such as images or additional parameters that guide the generation process. This adds flexibility and creativity beyond just textual prompts.
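For the most common case, prompt-based generation, a hedged sketch using the openai Python package against an Azure OpenAI deployment looks like this; the endpoint, key, API version, and deployment name are placeholders:

```python
# Sketch: generate an image from a text prompt with an Azure OpenAI image
# deployment. Endpoint, key, API version, and deployment name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2024-02-01",
)

result = client.images.generate(
    model="<your-image-model-deployment>",  # deployment name of the image model
    prompt="A delivery van parked outside a warehouse at dusk",
    n=1,
    size="1024x1024",
)

print(result.data[0].url)  # URL of the generated image
```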

Lastly, if you are developing an image tagging solution for social media to automatically tag friends in images, you should use the Face API service from Azure Cognitive Services. This service specializes in detecting and identifying human faces in images, which is the foundation for automatic tagging.
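A hedged sketch of the first step, face detection, with the Azure Face SDK is shown below; the endpoint, key, and photo URL are placeholders, and identifying who each face belongs to would additionally require a trained person group:

```python
# Sketch: detect faces in a photo as the first step of an auto-tagging workflow.
# Endpoint, key, and image URL are placeholders; identification of named people
# would also need a trained person group (not shown).
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

face_client = FaceClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    CognitiveServicesCredentials("<your-key>"),
)

faces = face_client.face.detect_with_url(
    url="https://example.com/group-photo.jpg",
    detection_model="detection_03",
)

for face in faces:
    rect = face.face_rectangle
    print(f"Face {face.face_id} at ({rect.left}, {rect.top}, "
          f"{rect.width}x{rect.height})")
```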

FAQ

Q1: What is object detection?
A1: Object detection is a computer vision capability that identifies objects within an image and provides their locations using bounding boxes.

Q2: Which AI service interprets user intent from natural language input?
A2: The Language Understanding (LUIS) service interprets the meaning and intent of user inputs.

Q3: What components allow voice-controlled smart devices to function?
A3: Speech to Text and Text to Speech are the two NLP workloads that enable voice commands.

Q4: What principle does using unconsented data violate?
A4: It violates the principle of Privacy and Security.

Q5: What technology does a traffic monitoring system use to collect vehicle registration numbers?
A5: It employs text detection technology to identify and extract text from images.

Q6: What is the correct sequence for planning a responsible generative AI solution?
A6: Identify, Measure, Mitigate, and Operate.

Q7: Which module in Azure ML designer helps create training and validation datasets?
A7: The Split Data module.

Q8: Can Azure OpenAI models generate images from inputs other than text?
A8: Yes, they can generate images from images and additional parameters as well.

Q9: What Azure service should be used for automatic image tagging?
A9: The Face API service from Azure Cognitive Services is designed for recognizing faces and enabling automatic tagging.