When you are ready to test and become an Informatica Certified Professional (ICP), please follow these steps:
Skill Set Inventory

The skill set inventory guides your preparation before taking the test. It outlines the technical topics and subject areas covered in the test, including test domain weighting, test objectives, and topical content. Topics and concepts are listed to clarify the test objectives.
The test domains, and the extent to which each is represented as an estimated percentage of the test, are as follows:
Title | % of Test
---|---
Big Data Integration Course Introduction & Big Data Basics | 6%
Informatica on Hadoop Architecture | 4%
Data Warehouse Offloading & Code Migration and Ingestion | 6%
Informatica Polyglot Computing in Hadoop | 3%
Monitoring, Logs, and Troubleshooting | 3%
Hadoop Data Integration Challenges and Performance Tuning | 3%
Complex File Parsing | 4%
NoSQL Databases | 3%
Developer Tool Fundamentals | 3%
Developing Physical Data Objects | 4%
Viewing Data | 4%
Developing Mappings and Transformations | 4%
Working with Dynamic Schema and Dynamic Mappings | 4%
Deploying Applications | 4%
Parameters | 4%
Workflows | 6%
Edge Data Streaming (EDS) Overview | 3%
Big Data Streaming Overview | 1%
Kafka Overview | 3%
Streaming Mappings | 16%
Monitoring Logs and Troubleshooting | 3%
Performance Tuning and Best Practices | 4%
End-to-End Use Case | 3%
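Because the weights above are stated as estimated percentages, a quick tally is a useful sanity check when dividing up study time (rounding in the published estimates means the figures need not sum to exactly 100%). The sketch below simply totals the listed weights and picks out the heaviest domain:

```python
# Tally the estimated domain weights from the table above (values in percent).
# Rounding in the published estimates means the total need not be exactly 100.
weights = {
    "Big Data Integration Course Introduction & Big Data Basics": 6,
    "Informatica on Hadoop Architecture": 4,
    "Data Warehouse Offloading & Code Migration and Ingestion": 6,
    "Informatica Polyglot Computing in Hadoop": 3,
    "Monitoring, Logs, and Troubleshooting": 3,
    "Hadoop Data Integration Challenges and Performance Tuning": 3,
    "Complex File Parsing": 4,
    "NoSQL Databases": 3,
    "Developer Tool Fundamentals": 3,
    "Developing Physical Data Objects": 4,
    "Viewing Data": 4,
    "Developing Mappings and Transformations": 4,
    "Working with Dynamic Schema and Dynamic Mappings": 4,
    "Deploying Applications": 4,
    "Parameters": 4,
    "Workflows": 6,
    "Edge Data Streaming (EDS) Overview": 3,
    "Big Data Streaming Overview": 1,
    "Kafka Overview": 3,
    "Streaming Mappings": 16,
    "Monitoring Logs and Troubleshooting": 3,
    "Performance Tuning and Best Practices": 4,
    "End-to-End Use Case": 3,
}

total = sum(weights.values())
top = max(weights, key=weights.get)
print(f"Total estimated weight: {total}%")
print(f"Largest domain: {top} ({weights[top]}%)")
```

Streaming Mappings carries by far the largest single weight (16%), so it warrants proportionally more preparation than the 1%–6% domains.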
Some questions require you to select more than one response. A question is scored as correct only if your response accurately completes the statement or answers the question. Plausible but incorrect distractors are included among the answer choices, so test takers without the required skills and experience may wrongly select them.
A passing grade of 70% is required to earn recognition as an Informatica Certified Professional (ICP), Data Engineering 10.2 Developer.
You are given 90 minutes to complete the test. Question formats used in this test include multiple choice, multiple response, and true/false.
The test contains 70 questions covering the sections listed below. To ensure that you are prepared for the test, review the subtopics within each section.
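Putting the numbers above together (70 questions, a 70% passing grade, and a 90-minute time limit), a quick back-of-the-envelope calculation gives the minimum number of correct answers and the average time available per question. Informatica's exact scoring rules are not published here, so treat this only as a rough planning aid:

```python
import math

TOTAL_QUESTIONS = 70   # number of questions stated above
PASSING_GRADE = 0.70   # 70% passing grade stated above
TIME_LIMIT_MIN = 90    # minutes allowed for the test

# Minimum questions you must answer correctly to reach a 70% score.
min_correct = math.ceil(TOTAL_QUESTIONS * PASSING_GRADE)

# Average time you can afford to spend on each question.
minutes_per_question = TIME_LIMIT_MIN / TOTAL_QUESTIONS

print(f"Minimum correct answers: {min_correct}")
print(f"Average minutes per question: {minutes_per_question:.2f}")
```

In other words, you need roughly 49 correct answers and have a little under a minute and a half per question on average.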
Big Data Basics
Informatica on Hadoop Architecture
Data Warehouse Offloading
Code Migration and Ingestion
Informatica Polyglot Computing in Hadoop
Monitoring, Logs, and Troubleshooting
Hadoop Data Integration Challenges and Performance Tuning
Complex File Parsing
Edge Data Streaming (EDS) Overview
Big Data Streaming Overview
Kafka Overview
NoSQL Databases
Developer Tool Fundamentals
Developing Physical Data Objects
Viewing Data
Developing Mappings and Transformations
Working with Dynamic Schema and Dynamic Mappings
Deploying Applications
Parameters
Workflows
Streaming Mappings
Monitoring Logs and Troubleshooting
Performance Tuning and Best Practices
End-to-End Use Case
Sample questions:

BDM connects to these technologies in a cluster: (select all that apply)
A. HDFS (Correct)
B. Hive (Correct)
C. SQL (Incorrect)
D. Yarn (Correct)
E. Spark (Incorrect)

NoSQL refers to what type of database?
A. non-SQL database (Incorrect)
B. non-relational database (Incorrect)
C. not-only SQL database (Incorrect)
D. All of the Above (Correct)

Scorecarding is performed on which of the following?
A. True (Incorrect)
B. False (Correct)

Which of the following is TRUE about Edge Data Streaming?
A. You configure the data flow in the Administrator tool. (Correct)
B. You configure the data flow in the Developer tool. (Incorrect)
C. You deploy the data flow configuration to the Data Integration Service. (Incorrect)
D. You must choose a Data Integration Service in the Informatica Monitor. (Incorrect)

The Developer Tool is a thin client and no installation is necessary.
A. True (Incorrect)
B. False (Correct)
Retake Policy: Current purchases of the test include one second attempt if you do not pass. Any additional retakes are charged at the fee current at the time of purchase. Promotions are excluded and cannot be combined. You must wait two weeks after a failed attempt before retaking the test.
Informatica University hosts a community page on the Informatica Network where students can assist one another with test preparation: https://network.informatica.com/welcome
Make sure you are familiar with our certification program guidelines BEFORE registering and taking the certification exam.