State Universities Civil Service System

Procedure Manuals

Title 1.3 Classification Specification and Examination Development Process
Manual A. Classification Plan Management
Subsection 1. Classification Specification and Examination Maintenance
Status Revised 5/16/2012
  1. General Outline
    1. Review of current and other related class specifications/examinations and other resources
      1. University System Office review
      2. External occupational research
      3. Subject matter expert review
      4. Examination analysis
      5. Test item analysis
    2. Job Analysis
      1. New Classifications
      2. Current/Existing Classifications
      3. Electronic presentation (E-Test)
      4. Special-case job analysis procedures
    3. Evaluation of Job Analysis
      1. Identify work tasks/duties, skills required, establish importance and frequency, set minimum qualifications
      2. Additional occupational research
    4. Class Specification and Examination Preparation
      1. Create Class Specification
      2. Create Examination Instrument Using Skill Set Matches
    5. Class Specification and Examination Review Meeting
      1. Meeting scheduled
      2. Review draft class specification changes
      3. Review draft examination materials
      4. Modify and set effective date
    6. Examination Pre-Testing
      1. Pre-testing of examination components
      2. Statistical analysis of pre-test results
      3. Collection of additional information from incumbent/subject matter experts
      4. Item seeding
      5. Security and distribution of pre-test results
    7. Validity
      1. Content validity established by incumbent pre-test examination scores/passing rate.
      2. Criterion validity established by successful completion of the probationary period.
      3. Additional validity measures may be established by correlating test scores with objective or subjective indicators of job performance.
      4. Small samples for which no meaningful validity index can be established
    8. Security and Confidentiality in the Examination Development Process
      1. Obligations under section 37 of the State Universities Civil Service Act (110 ILCS 70/37)
      2. Instructions for handling examination documents and materials
      3. Consequences for security violations
  2. Summary of Classification/Examination Development Process

    The Executive Director or designee within the University System Office will evaluate formally submitted proposals to assess the credibility of the criteria cited as justification in the submitted request. Formally submitted proposals may be returned for additional information or rejected.

    There are many reasons to justify a proposed change to the classification plan, including, but not limited to, the following:

    • routine occupational changes
    • adverse impact issues
    • business or operational changes
    • reaction to previous proposals
    • specific changes or evolution of job classification duties
    • technology changes related to overall position function
    • discontinuation of specified job activities

    Accordingly, University System Office staff may utilize the following analytical steps, as necessary, in their review of all proposed classification plan modifications, regardless of origination source (initiated by an employer, an employee through the State Universities Civil Service Advisory Committee, other designated advisory groups, union representatives, or the University System Office).

    1. Review of current and other related classification specifications/examinations and other resources
      1. The University System Office will review the current class specification/examination, considering the date of the last review and the format of the specification/examination.
      2. The University System Office will research external resources related to similar job classifications and appropriate occupational areas to review the latest occupational trends and specific job or job group content.
      3. Various subject matter experts will be enlisted to provide direct occupational background information and to begin analysis of the examination instrument.
      4. The examination instrument review will include the following:
        1. A review of the skill set link and question pools used in the current examination.
        2. Update/addition of questions to incorporate new occupational trends/technology.
        3. Verification of accuracy of answers.
        4. Identification of problem questions or questions that are likely to be challenged.
      5. Each test item will be analyzed using classical reliability theory and, where appropriate, item response theory (IRT). Classical reliability statistics become reasonably stable with samples of 50-60 test-takers, while IRT-based analyses are appropriate only when the number of test-takers is much greater, with minimum samples of about 200. IRT postulates a function (the item response function, or IRF) relating the probability of a correct response to an item to an underlying level of ability; because it makes considerably stronger assumptions about the data, more statistical power is needed to estimate the model. The following statistics are therefore most reliable with samples greater than 30. With smaller samples, subjective item difficulty ratings will be collected at pre-testing (see Section 1.3(b)(6)(C)(2)). Illustrative computational sketches of these statistics follow this list.
        1. Classical reliability statistics
          1. Mean: Proportion of test-takers who correctly answer the item. This is an indicator of item difficulty.
          2. Corrected Item-Total Correlation (CITC): The correlation between the item responses and the total test score (minus the studied item). This is an indicator of how well the item measures the characteristic assessed by the test.
          3. Cronbach’s Alpha: This is a measure of internal consistency reliability. In general, high values are desirable (.80 minimum, .90 preferred). This means that the items on the test “hang together” well, or have high item inter-correlations. Alpha is a lower bound for the true reliability of the test under reasonable assumptions.
          4. In cases where Cronbach’s alpha is not the most appropriate index of reliability, other reliability evidence may be used (i.e., test-retest, alternate forms, etc.; cf. Traub, 1994).
        2. Item response theory statistics
          1. a-parameter: Item discrimination: This indicates how well the item discriminates between test-takers of differing levels of ability. Related to the CITC.
          2. b-parameter: Item difficulty: Higher b’s are more difficult items, meaning that test-takers have to have a higher level of ability to have a high probability of answering correctly. Related to the item mean.
          3. c-parameter: Pseudo-guessing parameter. This is the lower asymptote of the IRF, indicating the probability that a test-taker with extremely low ability will answer the item correctly.
          4. Information: Information is the IRT analogue of reliability. It is a function of the item parameters and ability. It is additive, such that the information function for a test is equal to the sum of the information functions for the individual items. For a large enough number of items, the standard error of the ability estimate is approximately the reciprocal of the square root of the test information. Therefore, conditional standard errors of measurement can be calculated at all levels of ability, allowing the precision of measurement to be differentially assessed across the ability continuum. Additionally, information can be used to build tests, by incorporating items so that the sum of their information functions closely matches a target information function.
        3. Items with undesirable statistical properties will be eliminated or revised. Undesirable properties are generally defined to be item means above .90 or below .10 and CITC below .20. These rules-of-thumb may be modified in specific cases.
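        The first sketch below illustrates how the classical item statistics above might be computed. It assumes a dichotomously scored response matrix (rows are test-takers, columns are items, entries 1 = correct, 0 = incorrect); the function name, data layout, and default thresholds are illustrative only and are not the University System Office's actual analysis tooling.

          # Minimal sketch of the classical item statistics described above.
          # Assumes a 0/1-scored response matrix: rows = test-takers, columns = items.
          import numpy as np

          def classical_item_stats(responses, low=0.10, high=0.90, min_citc=0.20):
              """Return item means, corrected item-total correlations, alpha, and flags."""
              responses = np.asarray(responses, dtype=float)
              n_items = responses.shape[1]
              total = responses.sum(axis=1)

              # Item mean: proportion answering correctly (difficulty indicator).
              means = responses.mean(axis=0)

              # Corrected item-total correlation: item vs. total score minus that item.
              citc = np.array([
                  np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
                  for j in range(n_items)
              ])

              # Cronbach's alpha: internal-consistency reliability.
              item_var = responses.var(axis=0, ddof=1)
              alpha = (n_items / (n_items - 1)) * (1 - item_var.sum() / total.var(ddof=1))

              # Flag items with undesirable properties per the rules of thumb above.
              flagged = (means > high) | (means < low) | (citc < min_citc)
              return means, citc, alpha, flagged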
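        The second sketch illustrates the item response theory quantities above using the three-parameter logistic (3PL) model. The example a, b, and c values are hypothetical and do not come from any actual examination.

          # Sketch of the 3PL item response function (IRF) and item/test information.
          import numpy as np

          def irf_3pl(theta, a, b, c):
              """Probability of a correct response at ability theta under the 3PL model."""
              return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

          def item_information_3pl(theta, a, b, c):
              """Item information at ability theta; test information is the sum over items."""
              p = irf_3pl(theta, a, b, c)
              return (a ** 2) * ((1.0 - p) / p) * ((p - c) / (1.0 - c)) ** 2

          # Hypothetical three-item test: (a, b, c) = (discrimination, difficulty, guessing).
          theta = np.linspace(-3, 3, 61)
          items = [(1.2, -0.5, 0.20), (0.8, 0.0, 0.25), (1.5, 1.0, 0.15)]
          test_info = sum(item_information_3pl(theta, a, b, c) for a, b, c in items)
          csem = 1.0 / np.sqrt(test_info)  # conditional standard error of measurement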
    2. Job Analysis
      1. New Classifications
        1. When proposing to add a new classification, the University System Office will review the proposed classification specifications, and any related position descriptions. This information will be compared to other similar occupational jobs in the current classification system. Additional occupational research will be conducted using appropriate resources such as the Occupational Information Network (O*NET).
        2. Based on this research, the University System Office will develop and administer an appropriate job analysis survey as applicable, such as the Computerized Job Analysis Survey Instrument (C-JASI), to subject matter experts in order to determine the most appropriate duties and functions to be contained in the new classification, along with the knowledge, skills, and abilities (KSAs) necessary to successfully perform those duties and functions. This will assist in clarifying the new position specifications and in identifying the necessary skill set elements for the examination.
        3. Job analysis surveys, such as C-JASI, will be administered through a secure website and the results will be stored on a secure server. All information collected will be securely stored and maintained.
      2. Current/Existing Classifications
        1. When updating or revising an existing classification, the University System Office will begin with a review of current position descriptions and the proposed new classification specifications. This information will be compared to other similar occupational jobs in the current classification system. Additional occupational research will be conducted using appropriate resources such as the Occupational Information Network (O*NET).
        2. Based on this research, the University System Office may develop and administer C-JASI to current incumbents, supervisors, and departmental administrators to evaluate the congruence of the proposed specification and the work actually being performed. Other analytical procedures, such as the collection and review of job descriptions, may also be utilized. This information will be used to establish the relationship between the current position duties and responsibilities under review and the proposed specification, along with the KSAs required to perform those duties.
        3. C-JASI will be administered through a secure website and the results will be stored on a secure server. All information collected and statistical analysis will be securely stored and maintained.
      3. Limited job analysis techniques will typically be employed when paper-based exams are simply being converted to an electronic delivery format (E-Test). This process may include the simple step of confirming with supervisors and administrators that duties for the classification have remained intact and unchanged.
      4. In special cases, other job analysis techniques may be used, such as onsite focus group interviews with job incumbents and/or supervisors, direct observation of incumbents performing work tasks, and critical incident studies, among other techniques (cf. Gatewood & Feild, 2001). Copies of all materials and information collected will be securely stored and retained.
    3. Evaluation of Job Analysis
      1. C-JASI will be used to identify work tasks and duties currently performed by employees in the designated classification, as well as the importance and frequency of these tasks. For classifications where a knowledge test may be used, participants will also indicate what skills are necessary to perform each task.
        1. Statistics reported are demographic information for the surveyed groups, mean importance/frequency ratings for tasks in the overall sample and by group, and mean importance ratings for KSAs in the overall sample and by group, where applicable. Percent endorsements for specific educational and work experience requirements are also reported (an illustrative tabulation sketch appears at the end of this section).
        2. The results will provide an empirical linkage between duties performed on the job and examination items. Linkage is established by tying specific job tasks/duties to specific KSAs. Items may then be written to assess the KSAs needed to perform the job.
        3. In some cases, items will be written to assess performance directly for specific duties for the classification, rather than KSAs needed to perform those duties.
        4. Information regarding minimum qualifications will also be obtained to update the class specification, as well as guide the development of credential assessments, when applicable. These minimum qualifications are based on subject matter expert endorsement of specific educational and work experience backgrounds needed for the job and/or specific credentials needed to perform the job.
      2. Additional research, via the Internet or other information sources, may be conducted.
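      The following is a minimal sketch of how the survey summaries described in item A above might be tabulated. The column names ("group", "task", "importance", "frequency", "qualification", "endorsed") are hypothetical and do not reflect C-JASI's actual data layout; endorsements are assumed to be coded 0/1.

        # Sketch of job analysis survey summaries: mean importance/frequency ratings
        # overall and by respondent group, and percent endorsement of minimum
        # qualifications.
        import pandas as pd

        def summarize_job_analysis(ratings: pd.DataFrame, quals: pd.DataFrame):
            # Mean importance/frequency per task, overall and by respondent group.
            overall = ratings.groupby("task")[["importance", "frequency"]].mean()
            by_group = ratings.groupby(["group", "task"])[["importance", "frequency"]].mean()

            # Percent endorsement for specific education/work-experience requirements.
            pct_endorsed = quals.groupby("qualification")["endorsed"].mean() * 100
            return overall, by_group, pct_endorsed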
    4. Class Specification and Examination Preparation
      1. Class Specification
        1. Based on results of the review of job descriptions and/or C-JASI data, and/or in conjunction with the acceptance of the Request to Develop or Revise Class Specifications/Examinations, a draft class specification will be prepared, outlining the general function, specific duties/responsibilities and minimum acceptable qualifications.
        2. The proposed class specification will be distributed to employers for their review and comments, prior to the Class Specification and Examination Review Meeting.
      2. Examination
        1. To ensure validity, a draft examination will be prepared, based on the statistical analysis of the position descriptions and/or C-JASI. This will include a review/analysis of work tasks and duties currently performed by employees in the designated classification, the importance and frequency of these tasks, and the KSAs required to perform those tasks. Examination instruments may include one or more of the following components:
          1. performance/aptitude questions
          2. essay/written questions
          3. review/rating of credentials (education/experience and license/certificates)
          4. skills measurement, such as a keyboarding test
          5. physical ability assessment
          6. conscientiousness assessment
          7. personality characteristic assessment
          8. oral interview and presentation
        2. In certain test environments, new test items may be “seeded”. Seeded items are those that are pre-tested within live test forms. Seeded items are not scored and do not count towards or against the final test score. Test-takers are blind to which items are seeded and which are live. Seeded items are properly analyzed prior to their active use in any test environment.
        3. In certain test environments, test item subject content pools will be established and categorized based on established analytical procedures. Each examination administered will draw equally from the appropriate test item pools to establish a consistent distribution and reliability across all examinations given in any one classification. Test items will be presented in random order when possible, and correct answer designations for each test item will also be positioned randomly when possible (a sketch of pooling, seeding, and scoring follows this item).
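        The sketch below illustrates the pooling, seeding, and randomization described in items 2 and 3 above. The data structures and function names are illustrative assumptions, not the E-Test implementation, and randomization of answer positions is omitted for brevity.

          # Sketch: draw equally from categorized item pools, mix in unscored seeded
          # items, shuffle presentation order, and score only the live items.
          import random

          def assemble_exam(pools, per_pool, seeded_items):
              """pools: dict of category -> list of item dicts; each item has an 'id'."""
              live = [item for pool in pools.values() for item in random.sample(pool, per_pool)]
              form = live + list(seeded_items)
              random.shuffle(form)  # test-takers cannot tell seeded items from live ones
              return form

          def score_exam(form, answers, key, seeded_ids):
              """Seeded items never count towards or against the final score."""
              live = [q for q in form if q["id"] not in seeded_ids]
              correct = sum(1 for q in live if answers.get(q["id"]) == key[q["id"]])
              return correct / len(live)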
    5. Class Specification and Examination Review Meeting
      1. Upon completion of the research and analysis as described above, along with a draft of the new or revised specifications and/or examination instrument, a review meeting will be scheduled. In most instances, University System Office staff, Designated Employer Representatives (DERs)/Human Resources, subject matter experts, and Union Representatives will be notified and asked to participate.
      2. Participants will be asked to review the draft class specifications.
      3. Participants will be asked to review the draft examination materials.
      4. If necessary, modifications will be made to draft documents with final specifications/examination instruments adopted. At this time, the effective date of implementation will usually be determined.
      5. Prior to the finalization of the class specification/examination process, each employer shall conduct a secondary review to determine whether the proposals will affect employees who are members of bargaining units and shall officially inform appropriate union officials of the proposal. DERs shall certify to the University System Office, as a part of their written comments, that union officials have been informed of the proposal. Comments received as a result of the proposal shall be reviewed by the Executive Director or designee, who may approve, disapprove, or return the proposal to its originator for resolution of issues raised.

      Note: Once the review meeting is conducted and all information collected, only basic editorial corrections to the class specification and/or examination will be considered.

    6. Examination Pre-Testing
      1. In most instances, the proposed new examination instrument will be pre-tested, using current incumbents in the classification. This will typically be done via the E-Test system.
      2. The pre-test results will be statistically analyzed to establish content and criterion validity.
        1. Items with undesirable measurement properties will be eliminated.
        2. It is expected that incumbents will, on average, score higher than applicants (due to range restriction), so the definition of undesirable item statistics (from Section 1.3(b)(1)(E)(3)) will be different. It is not unreasonable for all incumbents to get an item right, so item means of .90 and above on pre-tests are acceptable. However, items with means below .50 merit further consideration and will be analyzed individually.
      3. Additional information about individual test items will be collected from incumbents or subject matter experts. This information includes item appropriateness for the examination and subjective item difficulty information.
        1. Item appropriateness ratings can be used to calculate content validity ratios (CVRs; Lawshe, 1975). CVRs provide evidence that the examination validly assesses KSAs appropriate to the classification (a computational sketch follows at the end of this subsection). CVR = (n - N/2) / (N/2), where:
          1. n= the number of respondents who regard the question as relevant to the target position
          2. N= the total number of respondents
        2. Subjective item difficulty ratings will be used to evaluate the difficulty of items when samples are too small to estimate item means or b-parameters (i.e., samples less than 30).
      4. As new items become available, they will be “seeded” as defined in Section 1.3(b)(4)(B)(2) above.
      5. The results of the pre-test will be stored on a secure server and shared with participating employers. All information collected will be securely stored and maintained.
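      As a computational illustration of the content validity ratio defined above (Section 1.3(b)(6)(C)(1)), the sketch below applies the formula directly; the worked numbers are hypothetical.

        # Lawshe's (1975) content validity ratio from item appropriateness ratings.
        def content_validity_ratio(n_relevant: int, n_total: int) -> float:
            """CVR = (n - N/2) / (N/2)."""
            return (n_relevant - n_total / 2) / (n_total / 2)

        # Example: 8 of 10 subject matter experts rate an item as relevant to the job.
        # CVR = (8 - 5) / 5 = 0.60
        print(content_validity_ratio(8, 10))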
    7. Validity
      1. Content validity is established through job analysis techniques and the pre-testing passing rate of incumbents, as described above. This process provides a statistical link between the test elements and the behaviors and/or work product components of the job.
      2. In most cases, criterion validity will be established by the passing rate for the probationary period for the classification.
      3. In some rare cases, criterion validity will be established by correlating test scores with objective or subjective indicators of job performance. This will be done only when sample sizes are large enough (e.g., N > 60) and when sufficient performance-related information is available (see the illustrative sketch following this list).
      4. In some cases (e.g., N < 30), samples are too small for any meaningful index of criterion-related validity to be established.
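      Where sample sizes permit, the criterion-related analysis in item C above amounts to a simple correlation, sketched below under the assumption that scores and performance indicators are available as numeric arrays; the function name and cutoff default are illustrative only.

        # Sketch of criterion-related validity: correlate examination scores with a
        # job performance indicator, only when the sample is large enough (N > 60).
        import numpy as np

        def criterion_validity(test_scores, performance, min_n=60):
            scores = np.asarray(test_scores, dtype=float)
            perf = np.asarray(performance, dtype=float)
            if scores.size <= min_n:
                return None  # sample too small for a meaningful validity coefficient
            return float(np.corrcoef(scores, perf)[0, 1])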

      Exception: A proposal may be approved prior to circulation if the Executive Director identifies an immediate need for its use. All employers and members of the State Universities Civil Service Advisory Committee will subsequently be notified.

    8. Security and Confidentiality in the Examination Development Process
      1. During the examination development process, participants will be trusted with confidential draft examination material and will be involved in confidential conversations.  All participants shall keep examination materials confidential and secure.  Any person, including, but not limited to a University System Office staff member, Designated Employer Representative, other campus/agency Human Resource employee, subject matter expert, Union Representative, or incumbent, involved in any step of the examination development process, who discloses, distributes, wrongfully maintains, or secures materials utilized in the development of any civil service examination shall be in violation of section 37 of the State Universities Civil Service Act (110 ILCS 70/37).
      2. During the course of examination development, the University System Office employees assisting with the examination development will instruct all persons participating in the exam development process on the proper maintenance, distribution, and possible destruction of all final and draft examination documents and materials. The employer's Human Resources staff will not involve any other persons (e.g., subject matter experts, incumbents) without direct notice to the University System Office. Should the employer's Human Resources staff involve any other personnel (e.g., subject matter experts, incumbents), the employer's Human Resources staff will instruct those employees on the proper maintenance, distribution, and possible destruction of draft examination materials and provide notice of this involvement to the University System Office.
      3. Any violation of the State Universities Civil Service Act, and, by extension, these security procedures, is considered a criminal offense and punishable under 110 ILCS 70/46. If a breach of security is discovered, the University System Office may be forced to discontinue the use of the exam in question, void all employment registers for that classification, and freeze all related employment activities in the affected classification until such time that a new exam can be developed.

References

Gatewood, R. D., & Feild, H. S. (2001). Human resource selection (5th ed.). Mason, OH: South-Western Thomson Learning.

Lawshe, C. H. (1975). A quantitative approach to content validity. Personnel Psychology, 28, 563-575.

Traub, R. E. (1994). Reliability for the social sciences: Theory and applications (Volume 3). Thousand Oaks, CA: Sage Publications, Inc.