Chapter 4A: The Test Assembler Tab – Information and Online Delivery Tabs


After all of the items you need have been added to your banks, you can begin to assemble tests. Tests are assembled and managed in the Test Assembler tab, the second tab in FastTest. You begin by creating a new test in this tab, then returning to the Item Explorer tab to add items to the test.

  • Tests are organized into groups (folders).
    • Before creating any tests, it is recommended that you create a group for the test by clicking the New button and then selecting New Group.
    • Then, either right-click on the group folder or select the group and click the New button, and select New Test.
      • For example, if you are going to be creating vocabulary tests, create a group named “Vocabulary” as shown in Figure 4.1.
Figure 4.1: The Test Assembler Tab

test assembler tab

Creating a New Test

When you create a new test, or edit an existing test, a new dialog (the Edit Test Options dialog) will open with tabs for specifying important information about the test (Figure 4.2).

The Information Tab

Figure 4.2: New Test Dialog – Information Tab

information tab

  • The first tab is the Information tab, which contains fields for basic descriptive information such as the test name and description.
    • The “Answer Marker” refers to how you want your test answers (also known as options or alternatives) labeled when the test is delivered to examinees.
      • You can have letters (ABCD) or numbers (1234), or no markers if delivered online (examinee clicks the answer text or clicks on a radio button or check box; see Online Delivery tab).
      • This does not override the marker settings on individual items; it applies only to items whose marker you indicated would be specified at the test level.
  • You can also set the test to be public or private. A private test can only be edited and used by its author.
    • For example, the various teachers in a school can only use and modify their own tests and not everyone else’s tests.
  • Tests can have custom fields, which allow a workspace to add any additional fields that may be missing from FastTest.
    • The example workspace above has one custom field called Grade Level.

Not all of the fields in the Edit Test Options dialog need to be specified immediately – you can return to complete the fields after adding items to your test.

The Online Delivery Tab

Figure 4.3: New Test Dialog – Online Delivery Tab

test assembler online delivery tab

The Online Delivery tab (Figure 4.3) specifies information for online delivery of the test.

  • Select the type of test from the Test Method drop-down menu. Each test method is discussed in detail in the following sections.
  • You can specify the text the examinee presses to submit the test and the text of the confirmation message the examinee agrees to before it is submitted.
  • A time limit can be set here in minutes (leave at zero for an untimed test). The test will be submitted automatically once that time limit is met.
  • Status icons, which let an examinee see the completion status of the questions or jump to specific questions, can be shown or hidden.
  • The test can optionally be taken by examinees anonymously, meaning they do not need to provide their name.

A test has many timing options.

  • A time limit can be set on the test.
    • Once the time expires during a test, the examinee’s test will automatically be submitted, and any items left unanswered will be counted as skipped (even required items).
    • FastTest supports extra time accommodation through the use of a multiplier.
      • If a test has a time limit and allows for extra time accommodation, examinees that require extra time will have a new time limit equal to the old time limit times the multiplier.
        • For example, if the time limit for a test is 1 hour and its multiplier is 1.5, examinees requiring additional time would have a time limit of 1 hour 30 minutes. The multiplier will also apply to any other time limits in the test (i.e., in a test section).
    • If no time limit is set, the elapsed time can be shown to the examinee.
    • Time that the examinee spends on a test will be recorded regardless of what options are used.
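
The accommodation arithmetic above is simple enough to sketch; this hypothetical helper (not FastTest's internals) just multiplies the base limit by the examinee's multiplier:

```python
def adjusted_time_limit(base_minutes, multiplier):
    """Apply an extra-time accommodation multiplier to a time limit (in minutes)."""
    return base_minutes * multiplier

# A 60-minute test with a 1.5x accommodation yields 90 minutes,
# matching the "1 hour -> 1 hour 30 minutes" example above.
print(adjusted_time_limit(60, 1.5))  # 90.0
```

The same multiplication would apply to any section-level time limits as well.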

 

  • Checking the box ‘Allow examinees to log out of the test’ will create a new button for exiting the online test before it is completed.
    • The examinee can then log back into the test at a later time by using the original test code.
  • Checking the box Show test name will display the test name in the delivery engine.

Enabling Access to Reference Materials

FastTest can provide a link for examinees to have access to reference material during an exam.

  • Checking the box Give examinee access to reference material during test will cause the Select Asset Folder button and Reference Material Link Text text box to appear

access to reference material

  • Select which asset folder to provide access to
    • Note: Only PDF assets within the chosen folder will be accessible to examinees
    • Folder hierarchy will be maintained and any PDFs located in subfolders will be available to examinees
      • Option to select subfolder only
  • Option to individualize the reference material link text
    • Defaults to ‘Reference Library’
  • Click Save
  • During an exam, the Reference Library link will be located above the item navigation

ref library link

  • Clicking on the link will open a new window with a list of the PDF assets from the chosen asset folder

Ref library link window

  • Examinees can click on the file name to open the document
  • To return to the list, click on Open a Different Document located in the upper right corner
  • Click on the X to close the reference material window

reference opened

Chapter 4B: The LOFT Options Tab


This tab includes options unique to a LOFT.

Figure 4.4: New Test Dialog – LOFT Options Tab

LOFT options tab

A linear-on-the-fly test (LOFT) is a test delivery method where every examinee receives the same number of items, but a different set of items than other examinees.

  • This makes it halfway between a traditional linear test (that has the same number and the same set of items) and an adaptive test (that has a different number of items and a different set of items).
  • With LOFT, every examinee will have their own test form constructed when they take the test, based on the specifications you provide, which greatly enhances exam security.

LOFTs can be constructed by selecting a total number of items, or by balancing across content domains.

  • For example, under the standard model of linear tests, the same 100-item test form with 20 items in each of the five domains is delivered to every examinee.
  • With LOFT, you can establish a pool of 150 items with 30 in each area.
    • FastTest’s intelligent test generator will custom-build a test for each examinee by selecting 20 out of 30 items in each area.
      • Because every person will have a different test, the content is much more secure.
      • Additionally, the assessment has greater perceived fairness because all the tests will be built from the same specifications, and every examinee is presented the same number of items.
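
A per-domain random draw captures the LOFT assembly idea described above (an illustrative sketch only; FastTest's actual generator also honors the other test specifications you provide):

```python
import random

def assemble_loft_form(pool_by_domain, counts_by_domain, seed=None):
    """Draw the required number of items at random from each domain's pool.

    pool_by_domain:   dict mapping domain name -> list of item IDs
    counts_by_domain: dict mapping domain name -> number of items to draw
    """
    rng = random.Random(seed)
    form = []
    for domain, n in counts_by_domain.items():
        form.extend(rng.sample(pool_by_domain[domain], n))
    return form

# Five domains of 30 items each; drawing 20 per domain yields a 100-item
# form, and each examinee receives a different random draw.
pool = {f"domain{d}": [f"d{d}_item{i}" for i in range(30)] for d in range(5)}
form = assemble_loft_form(pool, {dom: 20 for dom in pool})
print(len(form))  # 100
```
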

The LOFT delivery method is specified when editing test options.

  • When creating a new test, or editing an existing one, go to the Online Delivery tab of the Edit Test Options menu and select LOFT from the Test Method drop-down menu.
  • After selecting the LOFT method, this will create a new tab entitled LOFT Options.
    • This new tab allows test editors to edit the Content Constraints and other specifications for this test method.

Note: If there are no items in the current test, then you will need to return to Edit Items to add test questions. Items will need to be added to the test in order to set the options for this test method.

There are two ways a LOFT can be used:

  1. Build the test only by selecting a total number of items
  2. Build the test according to the Content/Domain specifications: use the Edit Content Constraints button to choose a target number of items for each individual Test Section.
    • First, Test Sections need to be built that implement the desired content outline.
    • The user will then need to return to the LOFT Options tab and check the box next to Enable Content Constraints in order for the constraints to take precedence.
    • The Constraint Input Type can be based on a percentage or on the number of items.
    • Enabling content constraints also allows calculation of subscores in the test, which is often necessary for student feedback reports. To allow subscores, go to the Scoring tab and check the box toward the bottom entitled Enable Subscores.
Figure 4.5: Edit Content Constraints with Number of Items Mode

enable content constraints number

Figure 4.6: Edit Content Constraints with Percent Mode

enable content constraints

When all of the settings have been determined, click Save at the bottom of the dialog box.

Chapter 4C: The CAT Tab


This tab (Figure 4.7) includes options unique to a CAT.

Figure 4.7: New Test Dialog – CAT Tab

CAT tab

CAT is a delivery method that uses IRT scoring to deliver items intelligently to each examinee.  The test adapts itself to each examinee, so that high-ability examinees do not waste time on easy items and low-ability examinees are not discouraged by difficult items.  A CAT is built from five components:

  1. Item pool – a set of items calibrated with a psychometric model (e.g., IRT);
  2. Initial θ (ability estimate) – where the algorithm should begin;
  3. Item selection method – the process of matching items to examinee;
  4. θ estimation method – the mathematical approach to estimating θ based on responses to the items that have been administered to an examinee;
  5. Termination criterion – the mathematical and/or practical constraint that must be satisfied for an examinee’s test to end.

The item pool is defined by you in the process of developing the test in FastTest. This leaves the remaining four components to be specified.

Initial θ
CAT selects each item for an examinee individually by determining which item in the bank is most appropriate for their ability level. The ability level (θ) estimate is updated after each item.

  • But for the first item, there is no θ estimate because no items have been administered yet – there is no way to score the examinee, so a temporary θ must be assigned.
  • The simplest method is to assign every examinee to start at the same level, typically the average of the distribution, often θ = 0.0 (Option 1).
    • This has the drawback that every examinee will see the same first item unless there is randomization in the item selection (see Item Selection Method below).
    • A simple way to address this is to randomly pick an initial θ for each examinee within given bounds (Option 2).
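
The two options for the initial θ can be summarized in a short sketch (the function name and bounds are hypothetical, not FastTest's code):

```python
import random

def initial_theta(method="fixed", fixed_value=0.0, lower=-0.5, upper=0.5, rng=None):
    """Assign a temporary starting theta for a new examinee.

    "fixed"  -> every examinee starts at fixed_value, e.g. 0.0 (Option 1)
    "random" -> a uniform draw within [lower, upper] (Option 2)
    """
    if method == "fixed":
        return fixed_value
    rng = rng or random.Random()
    return rng.uniform(lower, upper)

print(initial_theta())  # 0.0 (every examinee starts at the mean)
```
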

Item Selection Method
A typical CAT adapts itself to each examinee by selecting the item with the most information at the current θ estimate (Option 1).

  • This is the most efficient way to deliver items that are most appropriate for each examinee and obtain a precise final score with as few items as possible.
    • However, some testing programs wish to insert some amount of randomization (Option 2).
    • This randomization will, instead of picking the single item with the highest information, identify the x items with the highest information and randomly select among them.
    • This is extremely useful in two situations.
      • First, if the test is high-stakes and there is the possibility that the first few items will become well-known amongst examinees. Utilizing randomization will greatly increase the number of items being utilized across the population at the beginning of the test, aiding in security.
      • Second, if examinees will be taking the test more than once, the second test will likely lead down the same path into the bank; randomization will reduce the number of items that are seen again during the second test.
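
Maximum-information selection and its randomesque variant can be sketched with the 2PL information function (an illustration, not FastTest's code; `randomesque=1` reproduces strict maximum-information selection):

```python
import math
import random

def item_information(a, b, theta):
    """Fisher information for a 2PL item with discrimination a and difficulty b."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_next_item(items, theta, randomesque=1, rng=None):
    """Rank available items by information at the current theta estimate,
    then choose at random among the top `randomesque` of them."""
    rng = rng or random.Random()
    ranked = sorted(items,
                    key=lambda it: item_information(it["a"], it["b"], theta),
                    reverse=True)
    return rng.choice(ranked[:randomesque])

bank = [{"id": i, "a": 1.0, "b": b} for i, b in enumerate([-2, -1, 0, 1, 2])]
best = select_next_item(bank, theta=0.0)  # strict max-information selection
print(best["id"])  # 2 (the b = 0 item is most informative at theta = 0)
```
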

θ Estimation Method
FastTest is designed to use maximum likelihood estimation (MLE) of θ. Because MLE is undefined for nonmixed response vectors early in a test, where an examinee's responses are all incorrect or all correct (always the case after the first item), Bayesian maximum a posteriori (MAP) estimation is utilized as a temporary θ estimate in those cases. The test reverts to the less biased MLE method once a mixed response vector is obtained (at least one correct and at least one incorrect response).
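
The MLE/MAP fallback amounts to a simple check on the response vector (an illustrative sketch; the function names are hypothetical, not FastTest's implementation):

```python
def is_mixed(responses):
    """True once the 0/1 response vector has at least one correct (1)
    and at least one incorrect (0) response."""
    return 0 in responses and 1 in responses

def choose_estimator(responses):
    """MLE is undefined for all-correct or all-incorrect vectors,
    so fall back to Bayesian MAP until the vector becomes mixed."""
    return "MLE" if is_mixed(responses) else "MAP"

print(choose_estimator([1]))        # MAP (a single response is never mixed)
print(choose_estimator([1, 1, 0]))  # MLE
```
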

Termination Criterion
There are two approaches to terminating the CAT.

  1. Like conventional tests, a CAT can be ended after an arbitrary number of items that is the same for each examinee (e.g., 100 items for all examinees). This is Option 1.
    • However, the sophistication of CAT also allows for variable-length testing, where the test is concluded when a certain criterion is satisfied.
  2. Option 2 has two choices.
    • First, you can end the test when the standard error of measurement (SEM) falls below a certain point. This ensures that all examinees have scores of equal precision, something which is nearly impossible with fixed-form conventional tests.
    • The second choice is to end the test when no items remaining in the bank provide a certain level of information, which is designed to ensure that all items appropriate for a given examinee are used.
  • Additionally, you can set a minimum and maximum for the test length. A minimum is useful to ensure that all examinees receive at least 20 items, for example. A maximum is intended to prevent the CAT from continuing until the entire item pool is used, which it will do if either choice for Option 2 is too strict.
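
Combining the two Option 2 criteria with minimum and maximum lengths might look like this sketch (parameter names and defaults are hypothetical; thresholds are set by the test designer):

```python
def should_stop(n_administered, sem, max_remaining_info,
                min_items=20, max_items=50,
                sem_threshold=None, info_threshold=None):
    """Variable-length termination check (illustrative, not FastTest's code).

    Always stops at max_items; never stops before min_items; otherwise stops
    when the standard error falls below sem_threshold, or when no remaining
    item provides at least info_threshold of information."""
    if n_administered >= max_items:
        return True
    if n_administered < min_items:
        return False
    if sem_threshold is not None and sem < sem_threshold:
        return True
    if info_threshold is not None and max_remaining_info < info_threshold:
        return True
    return False
```
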

When a test is delivered via CAT, certain options are locked elsewhere in the Edit Test window.

  • For example, the option of allowing examinees to mark items for review is disabled.
  • Scoring is also fixed to IRT scoring; accordingly, all scored items in a CAT must have an IRT model and parameters before the test can be protected.
  • Unscored items (such as instructional and survey items) can also be placed in a CAT, but they will appear before and/or after the scored items.
    • To make unscored items appear before the scored items, group them together as the first items in the test item list.
    • Any unscored item that is not grouped at the beginning of the item list will appear at the end of the CAT.

Content Constraints
While computerized adaptive testing (CAT) is based on item response theory, which assumes a unidimensional trait, many testing applications have content areas or domains across which they desire to spread items.

  • For example, a mathematics test might have a blueprint which calls for 75% algebra items and 25% geometry items.
    • When building a traditional fixed-form test, this can be explicitly controlled for; a 20-item test would have 15 algebra items and 5 geometry items.
    • A CAT exam can be of variable length, so it needs to dynamically keep track of the item content distribution and select the next item appropriately.
      • If a CAT exam had delivered 19 items, 15 of which were algebra, then the CAT algorithm must know to select a geometry item next.

CATs implement this by constantly keeping track of the target proportions (0.75 and 0.25 in this example) and the actual proportions at any given time in the test.

  • The target proportions are specified by the test designer based on the blueprint of the exam, in the Content Constraints dialog window of the CAT tab.
  • Note that the exam pool must first be constructed according to the blueprints; that is, each item is specified as an algebra or geometry item.
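
The bookkeeping described above, comparing target proportions against actual proportions, can be sketched as follows (illustrative only, not FastTest's algorithm):

```python
def next_domain(target, administered):
    """Choose the domain whose actual proportion lags its target the most.

    target:       dict mapping domain -> target proportion (sums to 1.0)
    administered: dict mapping domain -> number of items delivered so far
    """
    total = sum(administered.values())
    def deficit(domain):
        actual = administered[domain] / total if total else 0.0
        return target[domain] - actual
    return max(target, key=deficit)

# After 19 items, 15 of them algebra: algebra sits at ~0.79 (target 0.75)
# while geometry sits at ~0.21 (target 0.25), so geometry is chosen next.
print(next_domain({"algebra": 0.75, "geometry": 0.25},
                  {"algebra": 15, "geometry": 4}))  # geometry
```
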

Clicking the Edit Content Constraints button at the top of the CAT tab will bring up the following dialog. Note: this button only appears upon editing an existing test, not when creating a new test.

Figure 4.8: Content Constraints Dialog

cat content constraints

To enable Content Constraints, check the box at the top. The dialog lists all of the test sections defined for the test as well as the number of items in each test section. Each test section has an associated Target Percentage, which reflects the percentage of total items administered that should come from that section. The Target Percentages must add up to 100; you will be prevented from deleting a test section that has a positive Target Percentage assigned to it.

Chapter 4E: The Test Assembler Tab – Scoring, Results Reporting, and Demographics Tabs

The Scoring Tab
Figure 4.9: New Test Dialog – Scoring Tab
scoring tab

 

The scoring tab presents sophisticated options for scoring examinees. There are four approaches to scoring:

  1. Item response theory (IRT) – scores examinees directly on a standardized scale (Dichotomous or Polytomous). You can use maximum likelihood estimation or Bayesian maximum a posteriori estimation. To learn more about IRT scoring, visit Thompson (2009).
  2. Sum score (Number Correct) – Total number of points calculated by summing answer weights.  In the case of each item being 1 point, this is the classical Number Correct score.
  3. Sum score (Number Incorrect) – The number of items answered incorrectly.  This is useful if you want to do linear scaled scoring with a negative slope.  For example, if you have a 100-item test, your scale could be Slope=-3.00 and Intercept=600, then scaled scores would range from 300 to 600.
  4. Sum (Proportion) Score – Scores can be reported as proportion correct using the Sum (Proportion) Score. Use the linear score conversion option with a slope of 100 to convert the decimal proportion (0 to 1) to a percentage.  Cut scores use the proportion fraction for pass and fail designation (e.g. “.75”).

In addition to these base methods, you can apply scaled scoring conversions. There are four options:

  1. Standardized score conversion converts one standardized scale to another standardized scale. For example, with IRT you might have a mean of 0 and standard deviation of 1.0; this might be converted to a mean of 500 and standard deviation of 100.
  2. Linear score conversion applies a linear function with a slope and y-intercept. This can be applied to the sum and sum (proportion) scores.
  3. Regardless of previous options, you can set your scale to have a minimum and maximum.
  4. Also regardless of previous options, you can set the level of rounding on your test.

An additional option on this tab is the raw cutscore. This is for tests where examinees are classified as Pass or Fail.

  • Note that the cutscore is specified on the raw (original, unscaled) metric. So in the IRT example above, a cutscore at the average would be 0.0, not 500.
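
The scaling options and the raw-metric cutscore can be illustrated with the numbers used above (a sketch; the function names are hypothetical):

```python
def linear_scale(raw, slope, intercept):
    """Linear score conversion: scaled = slope * raw + intercept."""
    return slope * raw + intercept

def standardized_scale(theta, old_mean=0.0, old_sd=1.0, new_mean=500.0, new_sd=100.0):
    """Convert a score from one standardized scale to another."""
    return (theta - old_mean) / old_sd * new_sd + new_mean

def classify(raw, raw_cutscore):
    """Pass/fail is decided on the raw (unscaled) metric."""
    return "Pass" if raw >= raw_cutscore else "Fail"

# Number-incorrect scaling from the example: slope -3, intercept 600.
print(linear_scale(0, -3.0, 600.0))    # 600.0 (no items answered incorrectly)
print(linear_scale(100, -3.0, 600.0))  # 300.0 (all 100 items incorrect)
# IRT example: theta = 0.0 maps to a scaled 500, but a cutscore at the
# average is entered as 0.0, not 500.
print(standardized_scale(0.0))  # 500.0
print(classify(0.0, 0.0))       # Pass
```
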

The Enable Subscores option is discussed in Enabling Subscores.

The Results Reporting Tab

Figure 4.10: New Test Dialog – Results Reporting Tab

results text2

The Results Reporting tab allows you to design customized score reports for your examinees.

  • Type in any static text, and then insert special text using the drop-down menu and button provided.
    • For example, in Figure 4.10 the report is individualized to include the examinee’s name, the number of items answered correctly, and the number of items on the test.
    • The numerical values to be entered in the report are selected using the Special Values pulldown list and the Insert button.
    • These fields are populated with the actual values for each examinee when the score report is created at the end of the test.

To provide subscore information to test takers after their tests, follow these steps:

1)     Create a table in Word that contains the subcategories and additional fields that you would want displayed to the test takers. You can populate static information in the fields that would be the same for all examinees that take that test (for example, include the Content Areas in the Word doc.).  One sample chart might look like this:

subscores table

2)      Go to the particular test, right click, and select “Edit Test Options”.

3)      Go to the Reporting tab and paste the table you created into the Rich Text Editor.

4)      From the drop-down menu below the Rich Text Editor, scroll to a particular subsection of the test, and select the value you want to insert.

5)      Put your cursor in the table where you want the value inserted, and then click Insert.

6)      Do this for each of the fields in the table.  Note that you only have to do this once per test.  In this example the subscore totals were typed into Word, but you could instead use the drop-down in FastTest to populate that column.

The Demographics Tab

Figure 4.11: New Test Dialog – Demographics Tab

demographics tab

The Demographics tab enables examiners to collect demographic information about the examinees. This information is tied to the examinee as a person rather than with a specific instance of a test. Use this tab to construct the form that the examinee will be asked to complete prior to beginning the test.

The examinee will be presented with a form consisting of any number of built-in or custom fields.

  • FastTest has a number of built-in fields that you can use such as First Name, Last Name, and Email.
  • Workspace administrators can also define Custom Examinee fields that allow workspaces to have their own fields (see Users).
    • To add a field, click the Add link, which will move the field into the Active Fields column on the right and show additional options.
    • Fields can be required to be answered by checking the Required checkbox.
    • If a field may already have data associated with it and you do not want the examinee to change it, you can check Read-Only If Has Value to show it to the examinee without allowing it to be changed.
    • The form presented to the examinee will have the fields in the same order you define (click the Up or Down links to change the order).
    • To remove a field, click the Remove link.
      • If all fields are removed, future examinees will no longer see the screen with this form, but any data collected with it will be preserved.

 

For a high level overview of test configuration options, please view the FastTest Configuration Options chart.

Chapter 4F: Test Sections


A test is made up of test sections, and test sections contain items. Test sections provide for a logical grouping of items.

Creating Test Sections

New tests will contain a single test section by default called “Test Section.” Additional test sections can only be created by locking the test for edit.

  • To lock the test for edit, click the test and select Edit → Edit Items. The display will be adjusted slightly and the New Test Section button will appear alongside the other buttons.
  • Icons for deleting and moving a test section will appear on its header row, as well.
  • Each test section gets a row in the test item list that indicates which items are contained in that section and provides buttons for interacting with the test section.
  • To edit a test section, click the pencil icon on the right of the test section row header.

The Information Tab

Figure 4.12: Test Section Dialog – The Information Tab

test section info tab

  • The information tab contains name and description fields. A test section can also be set to a fixed position within the test. If its position is fixed, it will not be moved when the order of a test’s sections is randomized.

The Online Delivery Tab

Much like the online delivery tab at the test level, the online delivery tab contains options for the online testing environment.

Figure 4.13: Test Section Dialog – The Online Delivery Tab and Options

 test section online delivery combined

  • Display Options
    • Items in the test section and their answers can have their order maintained as defined or randomized per examinee.
      • Note: Instructional items will never be randomized and will always stay in the same position.
    • Answers can be selected via radio buttons/checkboxes or via text hotspot.
      • Radio buttons/checkboxes use the traditional HTML widgets for selecting an answer.
      • Text hotspot allows the examinee to merely click the answer text in order to select it.
      • When a test is delivered to an examinee, each test section will get a button that, when clicked, will show the items in that section to the examinee. By default, the section is labeled x-y where x is the sequence number of the first item in the section, and y is the sequence number of the last item in the section.
        • Test sections have an option to use the test section name as the label for its button rather than its range of items.
        • Test sections also have the option of displaying all of their items on a single page.
          • Note: items that reference assets (via a link or using a split-pane layout) will ignore this option.
  • Timing Options
    • A time limit can be placed on this particular section.
      • Once the time expires, the examinee will be sent to the next test section (or to the first uncompleted section, starting from the beginning of the test).
      • If there is no other test section, the test will be submitted.
      • Adding a time limit will also prevent the test section from being returned to once it is completed. This prevents an examinee from toggling back and forth between test sections to cheat the timer.
        • Note that a time limit can be placed on test sections AND the test as a whole. If both are specified, two timers will appear to the examinee, and the test-level timer should be greater than the sum of the test section time limits. However, if there are time constraints, it is recommended to time only the test sections OR only the test as a whole, not both.
  • Navigation Options
    • Allows examinee to return to items within the section
    • Option to allow examinee to navigate between test sections if there is more than one
    • Examinee can be forced to answer required items (designated via a checkbox from the list of test items) before advancing pages.
  • Audio/Video Options
    • Controls settings for audio and video assets within the test section being edited
      • Option to allow the asset(s) to only play one time during a test
      • Can set assets to play automatically when the item is clicked on
  • Tools/Add-ons
    • Checking box(es) will provide the tool or add-on to an examinee during a test
      • Calculator
      • Ruler
      • Allow comments
      • Show protractor

Once a test is protected, these test section options can no longer be edited except by a Workspace Administrator.

Chapter 4G: Selecting Items for a Test


Once you have created the test and specified the basic information, the next step is to add items to test sections.

To Add Items

  • Click the Edit → Edit Items button, or right-click on a test and select Edit Items.
    • This will then bring up a list of items on the test (Figure 4.14), which will be empty if you have just created the test.
    • The list of items is also visible simply by selecting a test in the tree on the Test Assembler tab
    • Adding and deleting items on the test is not possible unless you click Edit → Edit Items, locking the test for editing.
  • Click on the Item Explorer tab to access the items you want to add to the test.
Figure 4.14: The Test Item List

test item list

Figure 4.14 displays some additional options that items have once they are added to a test.

  • The Scored checkbox indicates if the response to this item should be included in the score calculation.
  • The Required checkbox indicates whether this item is required to be answered.
  • The Force Page Break checkbox allows you to insert a page break after a specific item.
  • The Reverse Scale checkbox applies to Likert-Type items and allows you to use the reverse of what the examinee selected.
    • For example, if you have a five point scale with weights of 1, 2, 3, 4, and 5, applying reverse scale would cause the first option to use a weight of 5 instead of 1.
  • The checkboxes in the column headers can be used to check/uncheck all checkboxes for all items in the test.
  • Once the test is open for adding/deleting items, you then need to go back to the Item Explorer tab to look for items and add them to the test.
    • You can go there by clicking the green Add Items button, or by simply clicking the Item Explorer tab.
    • There are two ways to select items in the Item Explorer: browsing through the bank or searching on specific criteria.

Selecting Items by Browsing

You can assemble a test using any of the banks in a workspace by selecting items from any of the categories in those banks. Navigate through the workspace by clicking on banks or categories, and right-click → Show All Items to see the items in a given category.

  • To add an item to your test, simply click on the item and then click the green Add to Test button that is now visible above the list or right-click and select Add to Test.
  • You can also select multiple items in the list using the CTRL and SHIFT keys.
  • Items that have been added will have a green plus sign next to them.
  • If your test has multiple test sections, you will be prompted to select which test section the items should be added to.
Figure 4.15: Item Explorer While Adding Items

item explorer while adding items

Selecting Items by Searching

You can also select items for your test by having FastTest search your item banks for items meeting criteria that you specify. There are two ways to search.

  • To perform a quick search for certain item names, type the text into the Search Items box in the upper left, then click the magnifying glass or press Enter.
    • For example, in Figure 4.15 you could also pull up a list of all algebra items by searching for “alg” since that is in each item name.
  • The second way to search is the Advanced Search. Use this when performing sophisticated test assembly based on item statistics or IRT parameters.
    • Click on Advanced Search below the quick Search Items box, and the window in Figure 4.16 will appear.
    • You can then search for text again, by item status, item assignments, date created, date modified, and/or by ranges of values for all the relevant statistics.
    • Your results will then appear in the item list (Figure 4.15), at which point you can select which of the qualifying items you would like to add to the test.
    • You can add to a previous search by unchecking the “Clear previous results” checkbox.
Figure 4.16: Advanced Item Search Dialog

advanced item search

 

  • Note: If you want to view the items in a test in the Item Explorer tab, right-click the test and select Show in Item Explorer
    • Allows for easy access to items included in a test without having to search banks

Chapter 4H: Adding Multiple Items at Once, Building Random Test Forms, and Inserting Instructions


Adding Multiple Items at Once

Whether browsing through your bank structure or perusing a list that has been generated by a search, you can add multiple items at once.

  • For example, suppose you search for Algebra items with an IRT a parameter of 0.8 to 1.0, and 10 items were returned.
  • Using the CTRL key, you can highlight any number of items in the list, then click Add to Test only once.
    • The SHIFT key acts as in Microsoft Windows, selecting all objects in a list between the two points specified.
  • Items added this way will be added in the same order that they are selected.
  • Items added to a test by adding an entire bank or category will be added in alphabetical order by item name.

Building Random Test Forms

For item lists either from browsing or searching, you can also randomly select any number of items.

  • Right-click on the list and choose Select Random. You will then be asked to specify how many items to select.
  • This process will highlight as many items in the current list as you specified.
  • If you like the selection, you can click Add to Test.

This approach can be used to build entire forms randomly, or in combination with search criteria for stratified sampling. To build an entire form randomly, first make sure the list contains all of the items you wish to consider.

  • For example, if you want to build an Algebra test, you might drag the Algebra folder to the list window, so that all Algebra items are displayed.
  • If you want a test of 50 items, then choose Select Random → 50 items.
    • Note that the items will only be selected from the current page of items. You may have to increase the maximum number of rows per page via the dropdown menu above the item list.

To use stratified sampling to meet strict requirements, utilize the Search feature to narrow down the list first.

  • Again, suppose you search for Algebra items with an IRT a parameter of 0.8 to 1.0, and 10 items were returned.
  • You could then randomly add 5 of those items to your test.
  • You then move on to Geometry items that meet the IRT requirement, and select 8 items there if that is what you need.
  • The Search function serves to narrow down the items in the bank that meet given specifications from your test blueprints.
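The stratified workflow above can be mirrored in a short sketch. This is illustrative only, assuming hypothetical item records with `name`, `domain`, and `a` fields; it is not FastTest code.

```python
import random

# Hypothetical item records; in FastTest these live in your banks.
items = [
    {"name": f"ALG{i:02d}", "domain": "Algebra", "a": round(0.75 + 0.05 * i, 2)}
    for i in range(10)
]

# Step 1: "advanced search" -- filter by an IRT a-parameter range.
pool = [it for it in items if 0.8 <= it["a"] <= 1.0]

# Step 2: "Select Random" -- draw 3 of the qualifying items.
random.seed(42)  # seeded only so the illustration is repeatable
selected = random.sample(pool, 3)
```

Repeating this filter-then-sample step per content area (Algebra, Geometry, and so on) is exactly the stratified sampling described above.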

Inserting Instructions Into Your Test

FastTest inserts instructions into the test as items.

  • Simply create an “item” and write your instructions as its text, then change the item type to Instructional.
  • Instructional items can be entered into the test like any other “item” and moved to whatever location in the test you desire.
  • You can also have as many instructional items in your tests as you like.
  • For paper-based testing, if you want to start a new page before an instructional “item,” simply insert a page break at the appropriate location when viewing your test.

Also for paper-based testing, this approach can be applied to “testlets,” or test items that consist of a passage of information, followed by a number of questions about that passage, such as reading comprehension items.

For this type of item, create a separate (instructional) item for the passage and separate items for each question to be asked about the passage.

  • It is recommended to store all of these items in a subsection of a bank so they can all be selected at once for inclusion in the test.

Chapter 4I: Examining the Statistical Characteristics of Your Test

Table of Contents

After assembling a test or in conjunction with the assembly process, you can examine the statistical characteristics of the test.

  • FastTest will provide you with distributional statistics on the items in a test as you assemble it, and additionally displays the test-level IRT functions if you have IRT parameters.
    • You can monitor the statistical characteristics of your test items as you add and delete items from your test, until you are satisfied that it has the statistical characteristics that you desire.
    • All statistics and displays are automatically updated as you add or delete items from your test.
Figure 4.17: The Test Statistics Dialog

test-statistics

  • To view the summary statistics, either click the Statistics → View Test Statistics button, or right-click on a test and select View Test Statistics. This brings up the dialog in Figure 4.17.
    • The mean and standard deviation of your test statistics are presented on the left, with the IRT functions on the right.
    • To obtain actual values of the IRT functions, click the View Data button in the upper right.

Two additional pieces of information are also shown in this dialog.

  • First, the mean Angoff represents the cutscore for the test, as recommended by your Angoff panel. For a more detailed analysis of Angoff results, see the ASC Angoff Analysis Tool webpage.
  • Second, the estimated mean, standard deviation, and reliability of the number-correct scores are presented in the lower left.

Assembling Tests Using IRT

If your items have IRT parameter estimates, you can use the search capabilities of FastTest to assemble a test with a desired test information function (TIF) or conditional standard error of measurement (SEM) function.

  • The TIF shows the precision/information for a test as a function of the IRT θ (trait) variable.
  • Test information is the sum, conditional on θ, of the item information functions for all items in the test; the SEM function is the reciprocal of the square root of the TIF.

Suppose you wanted to create a test that was to be used to make a dichotomous classification of examinees. Such a test might be used to measure mastery in an educational environment or to select applicants in a personnel selection environment.

  • In IRT terms, such a test should have its highest information at the mastery/selection cutscore point on the θ continuum, while little information is necessary for scores far above or below the cutscore.

The item search capabilities of FastTest allow you to easily identify such items. Using the Advanced Search, specify a narrow range of b parameters around your θ cutscore.

  • For example, suppose you wanted to specify θ = 1.5 as your cutscore. You might specify your first search range as a b parameter of 1.1 to 1.9. The search function will display all items that meet these requirements, which you can then browse to select items.
  • Then return to the Test Statistics dialog to view the resulting TIF. If modification is necessary, return to your item search results to add/delete items.

Importing and Exporting Item Statistics for Items in Your Tests

To utilize the item and test statistic functionalities described above, you must of course have item statistics stored in your banks. You can also export a CSV file of the item statistics for further evaluation. FastTest provides functionality for both importing and exporting item statistics.

  • Item statistics can be imported in two ways.
    • First, they can be imported as part of the general import process, when importing previously used banks.
    • Second, the item statistics for a specific test can be uploaded in the Test Assembler tab.
      • For example, you might write a new test, deliver it to your examinees, and then analyze the results in a specialized program such as Iteman or Xcalibre. You can then upload the statistics from that analysis.
  • To upload item statistics from a test, you must first export the current statistics of the test, which would either be blank or contain previous values.
    • To export, click the Statistics → Export Item Metadata button, or right-click on a test and select Export → Export Item Metadata.
    • Save that file to your computer and fill in the new statistics where needed.
    • The file must be saved as a CSV file.

csv-save-as

    • Then, return to the Test Assembler tab and click the Statistics → Import Item Metadata button, or right-click on a test and select Import → Item Metadata.
  • Iteman Output Import
    • Locate the table of item statistics in columns, and copy the values for P and Rpbis.
    • Paste those values into the exported template (as Diff and Disc) and save the file on your computer.
    • Import the saved file into FastTest via Import → Item Metadata.
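The copy-and-paste step above can also be scripted. This is a hedged sketch assuming the exported template has `Item Name`, `Diff`, and `Disc` columns (check your actual export header) and that the Iteman P and Rpbis values have already been read into a dictionary.

```python
import csv
import io

# Hypothetical Iteman output: item name -> (P, Rpbis).
iteman_stats = {"ALG01": (0.72, 0.41), "ALG02": (0.55, 0.33)}

# Exported template from FastTest (column names are an assumption).
template = "Item Name,Diff,Disc\nALG01,,\nALG02,,\n"

rows = list(csv.DictReader(io.StringIO(template)))
for row in rows:
    p, rpbis = iteman_stats.get(row["Item Name"], ("", ""))
    row["Diff"], row["Disc"] = p, rpbis  # classical difficulty and discrimination

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["Item Name", "Diff", "Disc"])
writer.writeheader()
writer.writerows(rows)
updated_csv = out.getvalue()  # save this as a CSV file, then import it
```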

Chapter 4J: Reordering Items and Creating Alternate Versions, Deleting Items From Your Test, Protecting/Locking a Test, and Exporting Item Bank Fields For a Test

Table of Contents

Reordering Items and Creating Alternate Versions

There are two ways to reorder the items in your test – manually or during online delivery.

  • Manually
    • Start by making the first version of the test as you normally would.
    • After you have the items in the order you want them for the first version of the test, save that test under another name of your choice (e.g., My Test, Form B).
      • To do this, right-click on a test and select Save Test As.
    • Then open the second version, click the Scramble button, and save that version of the test.
    • Test Sections can be scrambled relative to other test sections and items within test sections can be scrambled.
    • Test Sections can be marked as fixed so that they do not change position when a test is scrambled.
      • Note: Instructional items will always remain fixed.
    • Repeat this process as many times as you like for any test to create any number of scrambled alternate forms.
    • You can also manually move items up and down to create your own ordering.
      • Moving items manually can be done by drag-and-drop in the test item list when the test is open for editing, or by selecting an item and then using the Move button (or right click menu).
  • Online Delivery
    • To randomly scramble the item order for each examinee during online delivery, change the Item Order drop-down menu on the Delivery tab of the Edit Test Section Dialog.
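Conceptually, scrambling while honoring fixed positions works like the sketch below: fixed entries (such as instructional items) keep their slots while everything else is shuffled among the remaining positions. This is an illustration of the idea, not FastTest's internal algorithm.

```python
import random

def scramble(items, is_fixed):
    """Shuffle items; entries flagged as fixed keep their original positions."""
    movable_idx = [i for i, it in enumerate(items) if not is_fixed(it)]
    movable = [items[i] for i in movable_idx]
    random.shuffle(movable)
    scrambled = list(items)
    for i, it in zip(movable_idx, movable):
        scrambled[i] = it
    return scrambled
```

For example, scrambling `["Instructions", "Q1", "Q2", "Q3"]` with the first entry marked fixed always leaves "Instructions" in position 1.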

Deleting Items From Your Test

As you assemble and refine your test, you might decide to delete an item from your test.

To delete an item from a test while in the Test Assembler

  • Open the test item list by right-clicking → Edit Items
  • Select the item to be deleted
  • Click on the red Delete button above the test list
  • You will be asked to confirm the deletion before the item is removed from the test.

Protecting/Locking a Test

When you have the final version of a test, you might want to protect (lock) the test so that it can no longer be modified. Protecting a test prevents further changes to both its item list and its test options.

Before protecting a test, it is a good idea to preview the test.

  • A test can be previewed by right-clicking and selecting Preview Test or by selecting the test and clicking the Preview button.
  • Options are then given to view the test without highlighting the keys and/or with rationale shown.
  • The test will then open in the view that the examinee would see online.
  • The only difference is that a test in preview does not have its answers saved and, therefore, cannot be scored.
Figure 4.19: Literature Test 1 is a Protected Test

4-19

  • To protect a test, select the test in the Test Assembler window and click the Protect button, or right-click and select Protect Test
  • A lock icon will then appear next to the test (Figure 4.19).
  • A test can only be unprotected by a workspace administrator and only if no examinees have taken it.

Protecting a test is required before being able to schedule examinees to take the test online.

Exporting Item Bank Fields For a Test

You can obtain a list of the information in any (or all) fields for the items in any test.

  • Right-click on a test in the Test Assembler and select Export → Export Items.
  • A dialog will open with check boxes for the user to indicate which fields to export, as seen in Figure 4.20.
  • Optionally, you can remove all of the HTML formatting from the items.
Figure 4.20: Test Item Export Dialog

4-20

Once the export is generated, you will receive an email. The export will be available for download under the Report Manager tab. If you are already logged in, you may need to click the button on the right to refresh the list of generated reports.

Figure 4.21: Report Manager – Recent Exports

4.21

Chapter 4K: Subscores

Table of Contents

 

A common requirement for tests, surveys, and other measurements is the use of subscores, also known as content areas or domains. Subscores are used to provide more detailed feedback to the examinee or the test sponsors than is possible with a single total score.

  • For example, a math exam might consist of items on geometry and algebra. Some tests might only calculate the total score, and require only that the test have half of its items on geometry and half on algebra. However, while the total score is of primary interest, FastTest can also provide scores on geometry items only and algebra items only, to identify the ability of an examinee in each topic. Moreover, FastTest provides the capability to design a complex hierarchical system of subscores to reflect a complex spanning of topics. All scoring functionality available with the total score (IRT, scaled scores, min/max) is also available for subscores.

Enabling Subscores

To enable subscores for a test

  • Click the test and select Edit → Edit Test Options to open the Edit Test Options Dialog
  • On the Scoring tab, a checkbox near the bottom labeled Enable Subscores can be used to toggle subscoring on or off.
  • When you enable subscores and click Save, you will be presented with a new Scoring pane between the test tree and the test item list.
  • The root node (called Main Score) represents the test’s overall score and includes all of its items.
Figure 4.21: A Test with Subscoring Enabled

subscoring enabled

Viewing/Editing Subscores

Subscores act similarly to item categories: they can hold items and can be created in a nested structure. Each subscore has a name, description, and a set of scoring parameters. The scoring parameters are the same as the scoring parameters available at the main score level (found on the Scoring tab of the Edit Test Options Dialog).

Figure 4.22: The Edit Subscore Dialog

edit subscore dialog

  • If the test is locked for editing, you can create new subscores and edit existing ones.
  • To add a new subscore, highlight the test, click on Edit and select Edit Items (or right-click on the test and select Edit Items)
  • Select the Main Score node (or another subscore) and click the button New Subscore (or right-click the node and choose New Subscore).
  • To edit an existing subscore, select the subscore and click the button Edit Subscore (or right-click the subscore and select Edit Subscore).
  • Subscores can also be moved via drag and drop.
  • Note: After a test is protected, the configuration of subscores cannot be changed.
Figure 4.23: Modifying the Items in a Subscore

modifying items in subscore

Assigning Items to Subscores

When you click on a subscore, its items will be shown in the table and the caption above the table will display which part of the test is currently being viewed. This is shown in Figure 4.23 on the Viewing/Editing Subscores page.

  • Adding items to a subscore
    • Click the Main Score node to show all of the test items.
    • Select which items you want to add and drag them into the appropriate subscore on the left.
    • To move an item from one subscore to another, click its current subscore to display the items, select the items to be moved, and drag them into the target subscore on the left.
    • Note that depending on which list of items you start with (the Main Score’s items or a subscore’s items), dragging and dropping will either add to a subscore or move from one subscore to another.
  • Removing items from a subscore
    • Click the subscore to display its items, select the desired items, and click Delete (or right-click and select Delete Item(s)).
    • Similarly to moving, depending on which list of items you are working with (the Main Score’s items or a subscore’s items), deleting will either delete from the test itself or delete from the subscore.
    • After a test is protected, the configuration of items in subscores cannot be changed.

Automatic Subscore Tags

Another way to create and assign items to subscores is by utilizing the Automatic Subscore Tags functionality.

  • Tags are initially created at the Workspace Overview level (Admin access required)
    • Click Configure, then select the Standards tab
      • Click on the + located to the right of the Tag Group field to open the window shown below
        • Enter a Tag Group Name and Description
        • Check the Active in Workspace and Create Subscores for tags boxes to enable the function in the workspace
        • Click Save to close the window and return to the Standards tab
      • From the Standards tab, click the + located to the right of the Tag field to open the window to create a new tag.
        • Enter a name and description
          • This tag will be the label used for the subscore within the workspace
        • Click Save and Next to create additional tags, or Save and Close to return to the Standards tab
        • Once all tags have been created click Save on the Standards window
      •  add tag group
        • Items can now be assigned to tags (subscores) at the item level
          • From the item editor select the Standards tab.
          • Begin typing the name of a tag and the system will automatically perform a search.
          • Select the desired tag as shown below
          • assign tag
          • Click Save to save the item with the tag assigned

Create a Test with Subscores in the Test Assembler

  • When you create or edit a test, go to the Scoring tab and select Enable Subscores AND Automatic Subscores by Tags
  • For Automatic Subscores to function, a test must contain at least three items that have been assigned to the same tag.
  • When the test is saved, a new subscore will be created that uses the name of that tag (e.g. “Addition” and “Geography” in the example shown below).

subscores created
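The three-item threshold described above can be illustrated with a small grouping sketch; the tag names echo the example, and the data layout is hypothetical.

```python
from collections import Counter

# Hypothetical (item, tag) assignments made on the Standards tab.
tagged_items = [
    ("M1", "Addition"), ("M2", "Addition"), ("M3", "Addition"),
    ("G1", "Geography"), ("G2", "Geography"),
    ("G3", "Geography"), ("G4", "Geography"),
    ("X1", "History"),  # only one item -- no subscore would be created
]

# A subscore is created only for tags assigned to at least three items.
counts = Counter(tag for _, tag in tagged_items)
auto_subscores = {tag for tag, n in counts.items() if n >= 3}
```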

 

Calculating Subscores

After a test has been taken, the main score will be calculated first, followed by any subscores.

  • If a subscore has a scoring method of None, it will be skipped.
  • Each subscore is calculated by first retrieving the set of items in that subscore and then applying the subscore’s scoring parameters to those items.

A subscore takes into account all of the items inside it, at any level of nesting.

  • As an example, recall the Geography subscore from above. The Geography subscore itself did not have any items immediately inside of it. It had a United States subscore, and that subscore in turn had a subscore, State Capitals. The only subscore with items directly in it was the State Capitals subscore. A test with this setup would have three subscores, and they would all work off the same set of items. (If the same item appeared in both the United States subscore and the State Capitals subscore, it would only be counted once in the United States subscore.)
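The counting rule in this example can be sketched as a recursive walk over the subscore tree. The dictionary layout below is hypothetical; the point is that item IDs are gathered into a set, so an item nested at any depth is counted exactly once.

```python
def collect_items(node):
    """Every item in a subscore, including all nested subscores, counted once."""
    ids = set(node.get("items", []))
    for child in node.get("children", []):
        ids |= collect_items(child)
    return ids

def number_correct(node, responses):
    """Apply a number-correct rule to the subscore's full item set."""
    return sum(responses.get(i, 0) for i in collect_items(node))

# The Geography example: items live only in the State Capitals subscore.
state_capitals = {"items": ["CAP1", "CAP2"]}
united_states = {"children": [state_capitals]}
geography = {"children": [united_states]}
```

With responses `{"CAP1": 1, "CAP2": 0}`, all three subscores work off the same two items and produce the same number-correct score of 1.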

Reporting Subscore Results

Please see the Results Reporting Tab section for details on including subscores on the Results Reports.