- How to Use These Questions
- Domain 1: Foundational Concepts (10 Questions)
- Domain 2: AI Development Lifecycle (10 Questions)
- Domain 3: Responsible AI (10 Questions)
- Domain 4: Risk Management (10 Questions)
- Domain 5: Governance & Regulation (10 Questions)
- Interpret Your Score
- Next Steps
- Frequently Asked Questions
The best way to prepare for the AIGP exam is to practice with realistic questions. This collection includes 50 scenario-based questions modeled after actual exam content, covering all five domains with detailed explanations for every answer.
Use these questions to assess your current knowledge, identify weak areas, and build the pattern recognition skills you'll need on exam day.
How to Use These Questions
Aim for 75%+ correct (38/50) on these practice questions before scheduling your exam. This buffer accounts for the fact that actual exam questions may differ from practice materials. If you're scoring below 70%, spend more time studying before attempting the real exam.
Why other options are incorrect:
- B) Unsupervised learning works with unlabeled data, finding patterns without known outcomes.
- C) Reinforcement learning learns through trial and error with rewards/penalties, not from labeled historical data.
- D) Transfer learning is a technique for applying knowledge from one task to another, not a fundamental learning type.
Key governance implication: In regulated industries like financial services, the inability to explain AI decisions can violate regulatory requirements (e.g., GDPR Article 22's right to explanation, fair lending laws requiring reason codes).
Why other options are incorrect:
- A) Overfitting would cause poor generalization to new data, but the scenario shows high accuracy.
- C) Insufficient data typically causes poor performance, not explainability issues.
- D) Concept drift occurs post-deployment when data distributions change; it is not the issue described.
Common NLP applications include: Text classification, sentiment analysis, named entity recognition, machine translation, chatbots, and text summarization.
Why other options are incorrect:
- A) Computer vision processes images and video, not text.
- B) Robotics involves physical machines and movement.
- D) Reinforcement learning is a learning paradigm, not a specific application area.
Governance implications: For customer service, hallucinations could provide incorrect information about products, policies, or legal rights. This creates liability, reputational, and compliance risks that require specific controls (human review, grounding techniques, output verification).
Why other options are less correct:
While A, B, and C are legitimate concerns for many AI systems, they're not specific to generative AI. Hallucination is the unique challenge that distinguishes generative AI governance.
This illustrates a critical governance principle: AI systems can only be as good as their training data. Non-representative training data leads to discriminatory outcomes, even when no one intended discrimination. This is why data governance, which ensures diverse, representative training data, is fundamental to AI governance.
Why other options are incorrect:
- A) Overfitting would show poor performance on ANY new data, not specifically one demographic group.
- C) Equipment quality is unlikely given the systematic pattern (affecting one demographic group consistently).
- D) Architecture issues would affect all predictions equally, not show demographic disparities.
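The systematic pattern described above is exactly what per-group evaluation is designed to catch. The following minimal sketch, using entirely hypothetical records and group labels, shows how comparing accuracy by demographic group can reveal a disparity that overall accuracy hides:

```python
# Minimal sketch (hypothetical data): checking model accuracy per
# demographic group to surface the kind of disparity described above.

def accuracy(records):
    """Fraction of records where the prediction matched the outcome."""
    return sum(1 for r in records if r["pred"] == r["actual"]) / len(records)

# Toy evaluation set with a 'group' attribute; values are illustrative only.
results = [
    {"group": "A", "pred": 1, "actual": 1},
    {"group": "A", "pred": 0, "actual": 0},
    {"group": "A", "pred": 1, "actual": 1},
    {"group": "B", "pred": 1, "actual": 0},
    {"group": "B", "pred": 0, "actual": 1},
    {"group": "B", "pred": 1, "actual": 1},
]

# Partition the evaluation set by group, then score each partition.
by_group = {}
for r in results:
    by_group.setdefault(r["group"], []).append(r)

per_group_accuracy = {g: accuracy(rs) for g, rs in by_group.items()}
# Group A scores 3/3 while Group B scores 1/3: a subgroup disparity
# that an aggregate accuracy number would mask.
```

In practice this disaggregated evaluation would run over a properly sampled test set, but the principle is the same: report metrics per group, not only in aggregate.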
Governance implication: Deep learning's automatic feature learning makes it powerful but also makes it harder to understand what features the model is actually using, contributing to the "black box" problem.
Why other options are incorrect:
- A) Both can use labeled or unlabeled data depending on the task.
- B) Deep learning is used across domains (NLP, audio, tabular data).
- D) Traditional ML can outperform deep learning, especially with limited data.
Key distinction: "Without predefined categories" signals unsupervised learning. If categories already existed (e.g., "high-value," "at-risk," "new"), you'd use supervised classification.
Why other options are incorrect:
- A) Supervised classification requires labeled categories to train on.
- C) Reinforcement learning learns through actions and rewards, not pattern discovery.
- D) Supervised regression predicts continuous values, not group membership.
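The distinction can be made concrete with a tiny clustering sketch. The example below, using made-up spend values and a deliberately simplified one-dimensional two-means routine, segments customers with no labels at all; the groups emerge from the data:

```python
# Minimal sketch of unsupervised segmentation: a 1-D two-means clustering
# of customer spend values. No labels are provided; the groups emerge
# from the data. All values and names are hypothetical.

def two_means(values, iters=10):
    lo, hi = min(values), max(values)   # initialize centroids at the extremes
    for _ in range(iters):
        # Assign each value to its nearest centroid, then recompute centroids.
        a = [v for v in values if abs(v - lo) <= abs(v - hi)]
        b = [v for v in values if abs(v - lo) > abs(v - hi)]
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    return lo, hi

spend = [12, 15, 14, 300, 280, 310, 13, 295]
low_center, high_center = two_means(spend)
# Two segments emerge without any predefined "low-value"/"high-value"
# categories; supervised classification would need those labels up front.
```

Contrast this with the supervised case: if the categories already existed, training data would pair each customer with a known label, and the model would learn to reproduce that labeling.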
Context matters: Different AI applications have different primary risks. For quality control (not involving people), the main concern is operational reliability, not bias or privacy.
Why other options are less appropriate:
- A) Quality control systems inspect products, not employees; no biometric data is involved.
- B) Discrimination against protected groups isn't relevant when inspecting manufactured items.
- C) Manufacturing quality control doesn't typically involve financial services regulations.
Governance implications:
- You may not know what data the base model was trained on
- Due diligence on third-party models is essential
- Testing for inherited biases must be part of deployment criteria
Why other options are incorrect:
- A) Transfer learning typically requires LESS compute than training from scratch.
- B) Pre-trained models CAN be adapted; that's the purpose of fine-tuning.
- D) Transfer learning often IMPROVES performance, especially with limited domain data.
Governance considerations for data labeling:
- Labeler consistency and quality control
- Labeler bias affecting ground truth
- Clear labeling guidelines and training
- Privacy of data being labeled
Why other options are incorrect:
- B) Feature engineering is creating input variables from raw data, not labeling outputs.
- C) Model inference is generating predictions from a trained model.
- D) Hyperparameter tuning is optimizing model configuration settings.
Key governance requirement: Ongoing monitoring for drift is essential. Models should not be deployed and forgotten; they need continuous performance evaluation and periodic retraining.
Why other options are incorrect:
- A) The scenario states the model code hasn't changed, ruling out software bugs.
- C) Poor design would cause problems from the start, not emerge after 18 months.
- D) Resource issues would affect speed/availability, not accuracy patterns.
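A drift monitor doesn't need to be elaborate to be useful. The sketch below is a hedged illustration, with an invented baseline and alert threshold, of the simplest form of performance monitoring: flag when recent accuracy falls well below the level measured at deployment.

```python
# Hedged sketch: a simple drift alarm that flags when recent accuracy
# drops well below the accuracy measured at deployment. The baseline,
# threshold, and window size here are illustrative, not recommendations.

BASELINE_ACCURACY = 0.92   # accuracy measured at deployment (hypothetical)
ALERT_DROP = 0.05          # alert if accuracy falls 5+ points below baseline

def drift_alert(recent_outcomes):
    """recent_outcomes: list of booleans, True if the prediction was correct."""
    recent_accuracy = sum(recent_outcomes) / len(recent_outcomes)
    return recent_accuracy < BASELINE_ACCURACY - ALERT_DROP, recent_accuracy

# 100 recent predictions, 84 correct: 0.84, below the 0.87 floor.
alert, acc = drift_alert([True] * 84 + [False] * 16)
# alert is True: time to investigate drift and consider retraining.
```

Real deployments would also track input-distribution statistics (to catch drift before labeled outcomes arrive) and route alerts into a defined response process, but the accuracy-floor check above is the core idea.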
Governance value: Model cards promote transparency and informed decision-making. They help downstream users understand whether a model is appropriate for their use case and what risks to consider.
The concept was introduced by Mitchell et al. (Google) in 2019 and has become a best practice in responsible AI development.
- 80% of qualified male applicants are recommended for interviews
- 80% of qualified female applicants are recommended for interviews
- However, 60% of total male applicants are recommended vs. 40% of total female applicants
Demographic parity requires equal overall selection rates regardless of qualifications. The 60% vs 40% overall rates violate this criterion.
Key insight: Different fairness definitions can conflict. This scenario illustrates why choosing the appropriate fairness metric depends on context and values; there is no single "correct" definition of fairness that works for all situations.
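The conflict in the scenario can be checked directly. The sketch below uses hypothetical applicant counts chosen to match the stated rates (the counts themselves are not from the scenario) and computes both metrics:

```python
# Sketch of the two fairness metrics from the scenario above. The counts
# are hypothetical, chosen only to reproduce the stated rates:
# 100 male applicants, 75 qualified, 60 recommended (all from the qualified pool);
# 100 female applicants, 50 qualified, 40 recommended (all from the qualified pool).
male = {"qualified": 75, "qualified_recommended": 60,
        "total": 100, "total_recommended": 60}
female = {"qualified": 50, "qualified_recommended": 40,
          "total": 100, "total_recommended": 40}

def equal_opportunity_rate(g):
    # Share of QUALIFIED applicants who are recommended.
    return g["qualified_recommended"] / g["qualified"]

def selection_rate(g):
    # Share of ALL applicants who are recommended (demographic parity compares these).
    return g["total_recommended"] / g["total"]

# Equal opportunity is satisfied: 0.80 for both groups.
# Demographic parity is violated: selection rates are 0.60 vs 0.40.
```

The same data satisfies one fairness criterion while violating the other, which is exactly why the metric must be chosen deliberately for each use case.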
Key GOVERN activities include:
- Establishing AI governance policies and procedures
- Defining roles, responsibilities, and accountability
- Setting risk tolerance thresholds
- Building organizational risk culture
- Ensuring resources for risk management
Remember the four functions:
- GOVERN: Policies, roles, culture (organizational foundation)
- MAP: Context, stakeholders, impacts (understanding the AI system)
- MEASURE: Metrics, testing, monitoring (quantifying risks)
- MANAGE: Respond, prioritize, improve (taking action)
Other prohibited AI practices include:
- Exploiting vulnerabilities of specific groups (age, disability)
- Social scoring by public authorities
- Real-time remote biometric identification in public spaces (with limited exceptions)
- Emotion recognition in workplace and education (with exceptions)
Why other options are not prohibited:
- A) Credit scoring is HIGH-RISK, not prohibited
- C) Recruitment AI is HIGH-RISK, not prohibited
- D) Chatbots have LIMITED RISK transparency requirements only
High-risk AI in law enforcement/justice includes:
- AI assisting judges in researching and interpreting facts and law
- AI assisting in applying the law to concrete facts
- AI for crime prediction (individual risk assessment)
- AI for parole and probation decisions
Why not prohibited? The Act distinguishes between assistance (high-risk) and fully automated decisions affecting fundamental rights without human oversight (which could be prohibited depending on implementation).
This page includes representative questions from each domain. For the complete 50-question set with detailed explanations across all domains, plus hundreds more practice questions, access our full question bank through the main practice quiz section.
Interpret Your Score
| Score Range | Interpretation | Recommendation |
|---|---|---|
| 80-100% (40-50 correct) | Excellent: likely ready to test | Schedule your exam; continue light review |
| 70-79% (35-39 correct) | Good foundation: some gaps remain | Focus on weak domains; more practice needed |
| 60-69% (30-34 correct) | Borderline: significant study needed | Do not schedule the exam yet; intensive study required |
| Below 60% (< 30 correct) | Not ready: fundamental gaps | Full study program needed before retesting |
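The banding in the table can be expressed as a small self-scoring helper; the thresholds below simply mirror the table for a 50-question set (the function name is my own):

```python
# Sketch of the score-interpretation table as a lookup function.
# Thresholds mirror the table above for a 50-question practice set.

def interpret_score(correct, total=50):
    pct = 100 * correct / total
    if pct >= 80:
        return "Excellent: likely ready to test"
    if pct >= 70:
        return "Good foundation: some gaps remain"
    if pct >= 60:
        return "Borderline: significant study needed"
    return "Not ready: fundamental gaps"

# e.g. interpret_score(41) falls in the 80-100% band.
```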
Analyze Your Domain Performance
Beyond overall score, track which domains you struggled with:
- Weak in Domain 1? Review AI/ML fundamentals: technical concepts through a governance lens
- Weak in Domain 2? Study the AI development lifecycle and governance checkpoints at each stage
- Weak in Domain 3? Focus on responsible AI principles, fairness definitions, and organizational implementation
- Weak in Domain 4? Deep dive into the NIST AI RMF: know all four functions and their activities
- Weak in Domain 5? Study EU AI Act classifications and requirements extensively
Next Steps
Based on your performance on these questions:
- Review all explanations, even for questions you got right
- Identify your weakest domain(s) and allocate extra study time there
- Practice more questions: aim for 300+ total before the exam
- Retake these questions after additional study to measure improvement
- Schedule your exam when consistently scoring 75%+
Frequently Asked Questions
Free AIGP practice questions are available from the IAPP (for members) and from this guide's 50 questions; various exam prep platforms also offer free trials. For comprehensive preparation with hundreds of questions, consider a dedicated question bank that tracks your performance by domain.
We recommend completing at least 300-500 practice questions before your exam. This volume helps build pattern recognition for exam-style questions and ensures you've seen the breadth of topics covered. Quality matters too: focus on understanding explanations, not just memorizing answers.
These questions are modeled after the AIGP exam format and content based on the Body of Knowledge, but are not actual exam questions. The real exam may include different scenarios and question styles. Use these for learning and assessment, but don't assume exam questions will be identical.
Aim for 75%+ on practice questions before scheduling your exam. This buffer accounts for exam-day nerves and differences between practice and real questions. If you're consistently scoring 70% or below, spend more time studying before attempting the exam.
Ready for More Practice?
Access our complete question bank with 500+ questions, detailed explanations, and performance tracking across all domains.