Glossary A

Absolute grading system

A grading system that assigns a fixed value to each level of achievement, without taking into account individual circumstances or progress.

  • Criterion-referenced vs Norm-referenced Evaluation (Assessment Types)

Academic standards

Guidelines or benchmarks that define the knowledge, skills, and abilities students should acquire at each grade level or in a specific subject area.

  • Learning Outcomes: Expected vs Achieved (Comparing Results)

Accountability

The responsibility for one’s actions and decisions, and the willingness to accept their consequences.

  • Realistic vs Stretch Objectives (Setting Goals)
  • Learning Outcomes: Expected vs Achieved (Comparing Results)
  • Individual vs Group Evaluation (Assessing Performance)
  • Long-term vs Short-term Training Goals (Setting Objectives)
  • Qualitative vs Quantitative Objectives (Setting Outcomes)

Accountability measures

Mechanisms that hold individuals responsible for progress toward professional development goals.

  • Training Needs: Current vs Future (Strategic Evaluation)

Accountability measures for individuals and teams

Mechanisms that hold both individuals and teams responsible for progress toward professional development goals.

  • Realistic vs Stretch Objectives (Setting Goals)

Accreditation

The process of evaluating and certifying that an educational institution or program meets established standards and requirements.

  • Learning Outcomes: Expected vs Achieved (Comparing Results)

Accuracy

The degree to which data or information is correct and precise.

  • 360-degree Feedback vs Self-evaluation (Sources of Input)
  • Criterion-referenced vs Norm-referenced Evaluation (Assessment Types)

Accuracy and reliability

The degree to which professional development efforts produce correct and consistent results.

  • Custom vs Standard Evaluation Forms (Choosing Tools)

Accuracy measures

Standards or criteria used to evaluate the accuracy and correctness of an individual’s work or performance.

  • Intrinsic vs Extrinsic Evaluation Metrics (Capturing Value)

Accuracy of results

The correctness and dependability of the outcomes produced by professional development efforts.

  • Custom vs Standard Evaluation Forms (Choosing Tools)

Achievability

The likelihood that professional development goals can be achieved.

  • Realistic vs Stretch Objectives (Setting Goals)

Achievable targets

Realistic and attainable goals that can be reached with effort and dedication.

  • Realistic vs Stretch Objectives (Setting Goals)

Achieved results

The outcomes or accomplishments realized through a program or organization’s efforts.

  • Learning Outcomes: Expected vs Achieved (Comparing Results)

Achievement

The level of success or accomplishment in reaching a specific goal or objective.

  • Learning Outcomes: Expected vs Achieved (Comparing Results)
  • Realistic vs Stretch Objectives (Setting Goals)

Achieving goals

Successfully reaching professional development goals.

  • Realistic vs Stretch Objectives (Setting Goals)

Actionable insights

Insights or information that can inform decision-making or prompt action to improve performance or outcomes.

  • Formative vs Summative Evaluation (Distinguishing Training Outcomes)

Actionable steps

Specific and measurable steps that can be taken to achieve professional development goals.

  • Realistic vs Stretch Objectives (Setting Goals)

Action-oriented goals

Goals that are specific, measurable, achievable, relevant, and time-bound (SMART), and that require concrete actions to achieve.

  • Behavioral vs Cognitive Objectives (Training Focus)

Action planning

The process of developing a detailed plan of action to achieve a specific goal or objective.

  • Reaction vs Learning Evaluation (Levels of Assessment)
  • Training Evaluation: Quantitative vs Qualitative Data (Choosing Methods)

Active recall

A learning technique in which information is actively retrieved from memory to strengthen retention.

  • Training Effectiveness vs Efficiency (Understanding Impact)

Adaptability

The ability to adjust and modify professional development goals as circumstances change.

  • Realistic vs Stretch Objectives (Setting Goals)

Adaptability to change

The ability to adjust and respond effectively to new or changing situations, environments, or demands.

  • Long-term vs Short-term Training Goals (Setting Objectives)

Adaptive learning technology

Technology that uses algorithms and data to personalize and optimize the learning experience for each individual.

  • Training Effectiveness vs Efficiency (Understanding Impact)

Adherence to ethical principles in evaluations

Ensuring that professional development goals are evaluated in a fair and ethical manner.

  • In-house vs External Evaluation (Choosing Evaluators)

Adherence to relevant laws, regulations, and policies

Ensuring that professional development goals are pursued in accordance with applicable laws, regulations, and policies.

  • In-house vs External Evaluation (Choosing Evaluators)

Adult learning principles

Principles and strategies effective in facilitating learning among adults, such as relevance, self-directedness, and experience-based learning.

  • Training Transfer vs Training Retention (Achieving Outcomes)

Affective skills

The ability to recognize, understand, and manage one’s own emotions, as well as the emotions of others.

  • Performance Metrics vs Learning Metrics (Measurement in Training)

Alignment

The process of ensuring that all aspects of a program or organization are consistent with its goals and objectives.

  • Learning Outcomes: Expected vs Achieved (Comparing Results)

Alignment with learning objectives and standards

Ensuring that professional development goals are in line with established learning objectives and standards.

  • Criterion-referenced vs Norm-referenced Evaluation (Assessment Types)

Alignment with organizational goals and objectives

Ensuring that professional development goals are in line with the specific goals and objectives of the organization.

  • In-house vs External Evaluation (Choosing Evaluators)

Alignment with overall organizational strategy

Ensuring that professional development goals are in line with the organization’s broader strategy.

  • Realistic vs Stretch Objectives (Setting Goals)

Alternative hypothesis

In a statistical test, the statement that proposes a different explanation or prediction than the null hypothesis.

  • Evaluation Design: Experimental vs Non-experimental (Research Methods)
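
To make the null/alternative distinction concrete, here is a minimal two-sided z-test sketch in pure Python (standard library only). The sample scores and parameter values are invented for illustration, and the test assumes the population standard deviation is known:

```python
import math

def two_sided_z_test(sample, mu0, sigma):
    """Test H0: population mean == mu0 against the alternative H1: mean != mu0.

    Assumes a known population standard deviation `sigma`.
    Returns the z statistic and the two-sided p-value.
    """
    n = len(sample)
    mean = sum(sample) / n
    z = (mean - mu0) / (sigma / math.sqrt(n))
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical post-training scores; H0 says the true mean is still 70.
scores = [74, 78, 69, 81, 75, 77, 72, 80]
z, p = two_sided_z_test(scores, mu0=70, sigma=5)
# A small p-value is evidence against H0, in favour of the alternative.
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) would lead an evaluator to reject the null hypothesis in favour of the alternative.
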

Alternatives

Options or choices available to address a problem or situation.

  • 360-degree Feedback vs Self-evaluation (Sources of Input)

Ambitious objectives

Challenging, high-reaching goals that require significant effort and dedication to achieve.

  • Realistic vs Stretch Objectives (Setting Goals)

Analysis techniques

Methods used to analyze and interpret data, such as statistical or qualitative analysis.

  • Baseline vs Follow-up Evaluation (Measurement Timing)

Analytical thinking

The ability to break down complex problems or situations into smaller components in order to understand and solve them.

  • Behavioral vs Cognitive Objectives (Training Focus)

Analytic scoring method

A method of evaluating performance that breaks complex tasks into smaller components, each scored separately.

  • Criterion-referenced vs Norm-referenced Evaluation (Assessment Types)

Anchoring and adjustment heuristic

A cognitive bias in which individuals rely too heavily on an initial piece of information (the anchor) and adjust insufficiently from it when making decisions.

  • 360-degree Feedback vs Self-evaluation (Sources of Input)

ANOVA (Analysis of Variance)

A statistical method used to compare means across multiple groups or conditions.

  • Training Evaluation: Quantitative vs Qualitative Data (Choosing Methods)
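
As a concrete sketch, the one-way ANOVA F statistic can be computed by hand in a few lines of pure Python (standard library only). The three groups below are invented example data, not results from any real evaluation:

```python
def one_way_anova(groups):
    """Return the F statistic comparing the means of several groups."""
    all_values = [x for g in groups for x in g]
    n_total = len(all_values)
    k = len(groups)
    grand_mean = sum(all_values) / n_total

    # Between-group sum of squares: variation of group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: variation of values around their own group mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

    df_between = k - 1
    df_within = n_total - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical test scores under three training formats.
group_a = [82, 79, 88, 85]   # e.g. instructor-led training
group_b = [74, 71, 77, 70]   # e.g. self-paced e-learning
group_c = [80, 83, 78, 81]   # e.g. blended approach
f_stat = one_way_anova([group_a, group_b, group_c])
print(f"F = {f_stat:.2f}")
```

A large F statistic indicates that between-group variation dominates within-group variation; in practice it would be compared against an F distribution (or a p-value computed) to judge significance.
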

Aptitude criteria

Standards or criteria used to evaluate an individual’s natural abilities or potential for learning and development.

  • Knowledge vs Skill Training Objectives (Setting Goals)

Assessment

The process of gathering and analyzing information to evaluate an individual’s knowledge, skills, or abilities.

  • Learning Outcomes: Expected vs Achieved (Comparing Results)
  • Formal vs Informal Training Evaluation (Choosing Approach)

Assessment accuracy

The degree to which an assessment measures what it is intended to measure, without error or distortion.

  • Self-assessment vs External Evaluation (Validity in Training)

Assessment bias

Systematic errors or inaccuracies in an assessment that can unfairly advantage or disadvantage certain individuals or groups.

  • Self-assessment vs External Evaluation (Validity in Training)

Assessment criteria

The standards or benchmarks used to evaluate performance.

  • Custom vs Standard Evaluation Forms (Choosing Tools)
  • Knowledge vs Skill Training Objectives (Setting Goals)

Assessment data

Information collected through various methods to evaluate the knowledge, skills, and abilities of students or educators.

  • Learning Outcomes: Expected vs Achieved (Comparing Results)

Assessment levels

The different levels of assessment, such as formative, summative, or diagnostic, each serving a different purpose in evaluating learning.

  • Reaction vs Learning Evaluation (Levels of Assessment)

Assessment methods

The techniques or procedures used to gather data for an assessment, such as tests, surveys, or observations.

  • Formative vs Summative Evaluation (Distinguishing Training Outcomes)

Assessment reliability

The degree to which an assessment produces consistent results over time and across different evaluators.

  • Self-assessment vs External Evaluation (Validity in Training)

Assessment timing

The point in time at which an assessment is administered, which can affect the validity and reliability of the results.

  • Pre-training vs Post-training Assessments (Timing Importance)

Assessment tools

Instruments or methods used to evaluate an individual’s knowledge, skills, or abilities.

  • Training Effectiveness vs Efficiency (Understanding Impact)
  • Formative vs Summative Evaluation (Distinguishing Training Outcomes)
  • Benchmarks vs Targets in Training (Setting Standards)
  • Reaction vs Learning Evaluation (Levels of Assessment)
  • Knowledge vs Skill Training Objectives (Setting Goals)
  • Performance Metrics vs Learning Metrics (Measurement in Training)
  • Self-assessment vs External Evaluation (Validity in Training)
  • Training Evaluation: Quantitative vs Qualitative Data (Choosing Methods)

Attainability

The likelihood of achieving a goal or objective.

  • Realistic vs Stretch Objectives (Setting Goals)

Attainable aims

Goals or objectives that are realistic and achievable.

  • Realistic vs Stretch Objectives (Setting Goals)

Attentional processes

The cognitive mechanisms involved in selecting and focusing on specific stimuli while ignoring others.

  • Behavioral vs Cognitive Objectives (Training Focus)

Attitude change

The process of altering one’s beliefs, feelings, or behaviors towards a particular subject or situation.

  • Behavioral vs Cognitive Objectives (Training Focus)

Attitude surveys

Surveys designed to measure people’s opinions, beliefs, and attitudes towards a particular topic or issue.

  • Performance Metrics vs Learning Metrics (Measurement in Training)

Authentic assessment

An assessment that measures skills and abilities through realistic, real-world tasks rather than artificial test conditions.

  • Criterion-referenced vs Norm-referenced Evaluation (Assessment Types)
  • Performance Metrics vs Learning Metrics (Measurement in Training)
  • Reaction vs Learning Evaluation (Levels of Assessment)

Availability of internal resources

The extent to which resources within an organization are available to support professional development.

  • In-house vs External Evaluation (Choosing Evaluators)