
This page describes how to create "Layout Engine Test Cases".

Guidelines

  • Everyone with XSL-FO knowledge can contribute test cases! :-)

  • A test case should be as short as possible. If you stumble upon what you think is a bug or a missing feature, make a copy of your current XSL-FO document and reduce it as much as possible so only the problem is demonstrated. This keeps the whole thing readable and is easier to debug.

  • Only check one problem at a time (or at most one group of problems at a time).
  • You don't necessarily have to write the checks yourself if you contribute a test case to the FOP project. We understand that this alone could be rather difficult. If you can write the checks, then please do, as it saves us some work.
  • Always provide a short description of what the test case tries to demonstrate.
  • If possible and sensible, stick to the naming convention, which is <object name>_<property name>_<property value>_<sequence no>.xml. <object name>, <property name> and <property value> refer to the main objective of the test. Of course, this will not fit all situations, and components can be omitted from or added to the test name. For tests referring to FOP Bugzilla entries the suffix _bug<number> is added.
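
For illustration, names following this convention might look like the following (these filenames are invented examples, not actual test files in the repository):

```
block_background-color_1.xml
table_border-collapse_collapse_2.xml
block_keep-together_bug12345.xml
```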

Where are they?

You can find the test cases under the following directory:

test/layoutengine

http://svn.apache.org/viewcvs.cgi/xmlgraphics/fop/trunk/test/layoutengine/

The test cases are divided into groups depending on which resources they need. Test cases that should work against the "out-of-the-box" version of FOP are in test/layoutengine/standard-testcases. Test cases that require hyphenation files are in test/layoutengine/hyphenation-testcases. Further subdirectories may be added in the future.

How do I run them?

The easiest way is to simply run the FOP build process. There's an Ant target called "junit" that runs all of FOP's enabled tests. The target "junit-layout" runs all the layout engine tests and the targets "junit-layout-standard" and "junit-layout-hyphenation" run only the standard and hyphenation tests respectively.
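
Assuming Apache Ant is installed and you are in the top-level FOP directory, these targets are invoked like this:

```
ant junit
ant junit-layout
ant junit-layout-standard
ant junit-layout-hyphenation
```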

Or you can set up FOP in your favourite Java IDE and run the JUnit tests from there.

Please note that there is a file disabled-testcases.xml in test/layoutengine which contains a list of file names. These are the tests which are by default excluded from the JUnit tests. Tests listed here currently fail and are kept as a reminder for those who fix bugs or implement missing features.

Setting up the Test Suite in your favorite IDE

You can run the test suite from your IDE by setting up a JUnit launch configuration on the org.apache.fop.layoutengine.LayoutEngineTestSuite class. There are a few system properties you can use to control which test cases are run:

  • fop.layoutengine.disabled: Here you can specify the filename for the file that contains the disabled test cases (Normally you should set this to: test/layoutengine/disabled-testcases.xml)

  • fop.layoutengine.single: Lets you specify the name of a single test case to run. (Filename including extension)

  • fop.layoutengine.starts-with: Filters all available test cases to file names that begin with the specified text. ("table" runs all table-related tests)

  • fop.layoutengine.testset: Lets you specify a particular test set to run. A test set called mytests consists of all test files in directory mytests-testcases. Currently FOP contains the test sets "standard" and "hyphenation". If you don't specify this property the "standard" test set is run. There's another special set ("private"): If you create a directory called "private-testcases" you can place your own test cases in there that won't be committed to the SVN repository because "private-testcases" is set to be ignored by SVN.

To make switching between different sets of test cases easier, you can prepend an "x" (or similar) to a system property's name, which causes the property not to be found anymore and effectively disables it.

Eclipse Users

Eclipse users can set these variables in the "Run Configurations" dialog, on the second tab ("Arguments"), in the text box labeled "VM arguments".

Examples:

  • -Dfop.layoutengine.disabled=test/layoutengine/disabled-testcases.xml

  • -Dfop.layoutengine.single=inline_border_padding_block_nested_1.xml

  • -Dfop.layoutengine.starts-with=table

  • -Dfop.layoutengine.testset=standard

Example of a disabled property:

  • -Dxfop.layoutengine.single=inline_border_padding_block_nested_1.xml

How does it work?

The tests are basically XML files with a predefined structure. Here's the raw structure:

<testcase>
  <info>
    <p>
      This test checks <something>.....
    </p>
  </info>
  <variables>
    <img>../../resources/images/bgimg300dpi.jpg</img>
  </variables>
  <fo>
    <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format" xmlns:svg="http://www.w3.org/2000/svg">
      <fo:layout-master-set>

        <!-- etc. etc. -->

        <fo:block-container background-image="##img">

        <!-- etc. etc. -->

    </fo:root>
  </fo>
  <checks>
    <eval expected="0 0 360000 360000" xpath="/areaTree/pageSequence/pageViewport/@bounds" desc="page size"/>
    <true xpath="/areaTree/pageSequence/pageViewport/page[1]"/>
    <true xpath="not(/areaTree/pageSequence/pageViewport/page[2])"/>
    <eval expected="0 0 360000 360000" xpath="/areaTree/pageSequence/pageViewport/page[1]/regionViewport/@rect" desc="region body area"/>
  </checks>
</testcase>
  • The first part ("info") simply contains some information about what the test case does.
  • The second part ("variables") is optional and can contain variable definitions for the FO document if you want to avoid a lot of copy/paste. To refer to the variable "img" write "##img" in any attribute inside the FO document (see example above). The variable substitution is done using XSLT.
  • The third part ("fo") contains a full XSL-FO document.
  • The fourth part ("checks") contains all the checks for this test. See below for details.

Now, when you are creating a test case you can use the XSLT stylesheet called "testcase2fo.xsl" if you want to manually run it through FOP (or any other XSL-FO implementation). The stylesheet basically extracts the XSL-FO document from the XML file.
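
As an illustrative sketch of what the stylesheet does (a minimal Python model of the effect, not the actual XSLT implementation), extracting the FO document and expanding ##name variable references could look like this:

```python
import xml.etree.ElementTree as ET

# A minimal testcase document, modelled on the structure shown above.
TESTCASE = """\
<testcase>
  <info><p>Minimal example.</p></info>
  <variables>
    <img>../../resources/images/bgimg300dpi.jpg</img>
  </variables>
  <fo>
    <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
      <fo:block-container background-image="##img"/>
    </fo:root>
  </fo>
</testcase>
"""

def extract_fo(testcase_xml):
    """Pull the XSL-FO document out of a testcase and expand ##name
    variable references in attribute values (the real harness does this
    with the testcase2fo.xsl stylesheet)."""
    root = ET.fromstring(testcase_xml)
    vars_el = root.find("variables")
    variables = {} if vars_el is None else {v.tag: (v.text or "") for v in vars_el}
    fo_root = list(root.find("fo"))[0]   # the fo:root element
    for el in fo_root.iter():
        for name, value in el.attrib.items():
            for var, repl in variables.items():
                value = value.replace("##" + var, repl)
            el.set(name, value)
    return fo_root

fo = extract_fo(TESTCASE)
# The ##img reference has been replaced by the variable's value:
print(fo.find("{http://www.w3.org/1999/XSL/Format}block-container")
        .get("background-image"))
```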

When the JUnit checks for the layout engine are run, the following happens. The checks are extracted from the XML file and each of them is checked against the result of a FOP processing run. Most tests check against the "Area Tree XML" that is generated by FOP's XMLRenderer. From the command line you can generate it by using "-at" instead of "-pdf", for example. These checks are normally XPath queries against the area tree XML (see the "true" and "eval" checks below).

When you run test cases the generated Area Tree XML is written to the build/test-results/layoutengine directory.

What checks do I have available?

Area Tree Checks

true

Format:

<true xpath="[XPath expression which results in a boolean value (true or false)]" fail-msg="[optional string]"/>

If the XPath expression evaluates to anything other than "true", the test fails. The optional fail-msg attribute can be used to supply a custom error message when the check fails.
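
For example, the check below (taken from the sample test case above, with a hypothetical fail-msg added) asserts that no second page was produced:

```
<true xpath="not(/areaTree/pageSequence/pageViewport/page[2])" fail-msg="expected exactly one page"/>
```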

eval

Format:

<eval expected="[expected value]" xpath="[XPath expression]" tolerance="[optional number]"/>

This is similar to the "true" check, but here you can specify an expected value that the XPath expression must evaluate to. Generally you get a more descriptive error message if such a test fails than with "true". You can use the optional "tolerance" attribute to specify a tolerance for number comparisons.
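
To illustrate the semantics of the "eval" check, here is a minimal Python sketch. Note the assumptions: the real harness evaluates full XPath expressions (including @attribute selection), while this sketch splits the element path and attribute name because Python's ElementTree only supports a subset of XPath; the tolerance handling (comparing each space-separated number) is likewise a model, not a transcript of FOP's implementation:

```python
import xml.etree.ElementTree as ET

# A tiny stand-in for the area tree XML produced by FOP's XMLRenderer.
AREA_TREE = """\
<areaTree>
  <pageSequence>
    <pageViewport bounds="0 0 360000 360000"/>
  </pageSequence>
</areaTree>
"""

def check_eval(tree, element_path, attribute, expected, tolerance=None):
    """Mimic an <eval> check: compare an attribute's value to the expected
    one, optionally allowing a numeric tolerance on each component."""
    actual = tree.find(element_path).get(attribute)
    if tolerance is None:
        return actual == expected
    pairs = zip((float(x) for x in actual.split()),
                (float(x) for x in expected.split()))
    return all(abs(a - e) <= tolerance for a, e in pairs)

tree = ET.fromstring(AREA_TREE)
# Exact comparison, as in the "page size" check above:
print(check_eval(tree, "pageSequence/pageViewport", "bounds",
                 "0 0 360000 360000"))                        # True
# With a tolerance, small numeric deviations still pass:
print(check_eval(tree, "pageSequence/pageViewport", "bounds",
                 "0 0 360010 360000", tolerance=20))          # True
```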

element-list

Format:

<element-list category="[category]" id="[id]" index="[index]">
  (box|penalty|glue|skip)*
</element-list>

This kind of check is aimed at those who know how the Knuth element list approach works, so it is mostly for developers. With this check you can intercept an element list that is generated during layout.

  • "category" must be one of: "breaker" or "table-cell" (more may be added later)
  • "id" is optional and can be used to identify a specific element list coming from an FO node with the given ID. This works great for table-cells, for example.
  • "index" is also optional and can be used if you don't have an "id" but still have more than one element list of the same category. The index is zero-based.

<box w="[length]"/>

<penalty w="[length]" p="[p]" flagged="[boolean]" aux="[boolean]"/> <!-- p can also be "INF" or "-INF" -->

<glue w="[length]"/> <!-- stretch and shrink are NYI -->

<skip>[integer]</skip> <!-- can be used to skip n elements in the list -->
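
Putting these together, a hypothetical check for the element list of a table-cell with id "cell1" could look like this (the id and the element values are invented for illustration):

```
<element-list category="table-cell" id="cell1">
  <box w="14400"/>
  <penalty w="0" p="INF"/>
  <box w="14400"/>
</element-list>
```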

Event Check

It is possible to check that certain events (see http://xmlgraphics.apache.org/fop/trunk/events.html) have occurred while processing the tests, and that (some of) their parameters match expected values. For example:

<event-checks>
  <event key="inlineContainerAutoIPDNotSupported" fallback="300.0"/>
</event-checks>

The general format is:

<event key="[key as returned by Event.getEventKey()]" [param1="expected value1" param2="expected value2" ...]/>

In theory we should check on the event ID rather than its key to avoid any ambiguity, but that would be impractical (the full ID of the event shown in the example is "org.apache.fop.layoutmgr.BlockLevelEventProducer.inlineContainerAutoIPDNotSupported"...). In practice, the key alone will be enough to distinguish events.

It is possible to specify only the subset of the event's parameters that is of interest. The expected value should be what the toString method called on the parameter would return.

The event checker will process events sequentially. If several events with the same key are expected, then they will be checked in the order given against the sequence of actual events. The checker will not skip one event in an attempt to find a subsequent one that matches. This is to avoid a lack of precision in the event checking. That means that if we want to check one event, then we must also add checks for all the previous events with the same key.

For example, if two "inlineContainerAutoIPDNotSupported" events are expected, one with a fallback value of 300 and one with a fallback value of 100, and we are interested in checking the second one, then we must still add a check for the first one. Otherwise, the checker has no means to determine whether the test should fail because the first event has a fallback value of 300 while 100 was expected, or whether it should pass because the first event should be ignored and the second one does have a fallback value of 100.
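
The sequential matching described above can be modelled with a short sketch (an illustrative model only, not FOP's actual EventChecker code):

```python
def match_events(expected, actual):
    """Sequentially match expected event checks against actual events.

    'expected' is a list of (key, params) checks; 'actual' is a list of
    (key, params) events in production order.  Same-key events are
    consumed strictly in order -- the checker never skips one to find a
    later, better-matching event.  Returns None on success, or an error
    message for the first mismatch."""
    remaining = list(actual)
    for key, params in expected:
        # Events with other keys are not checked; they would go to the
        # regular EventListener instead.
        while remaining and remaining[0][0] != key:
            remaining.pop(0)
        if not remaining:
            return "missing event: " + key
        _, actual_params = remaining.pop(0)
        # Only the parameters listed in the check are compared, as strings.
        for name, value in params.items():
            if str(actual_params.get(name)) != value:
                return "event %s: %s was %s, expected %s" % (
                    key, name, actual_params.get(name), value)
    return None

actual = [("inlineContainerAutoIPDNotSupported", {"fallback": 300.0}),
          ("inlineContainerAutoIPDNotSupported", {"fallback": 100.0})]
# Checking only the second event fails: the first same-key event is
# matched first and its fallback is 300.0, not 100.0.
print(match_events([("inlineContainerAutoIPDNotSupported",
                     {"fallback": "100.0"})], actual))
# Adding a check for the first event as well makes the test pass:
print(match_events([("inlineContainerAutoIPDNotSupported", {"fallback": "300.0"}),
                    ("inlineContainerAutoIPDNotSupported", {"fallback": "100.0"})],
                   actual))  # None
```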

Events that are not checked will be passed on to a regular EventListener that will typically display the corresponding message on the standard output.

How to create additional kinds of checks?

You can easily implement new checks by subclassing org.apache.fop.layoutengine.LayoutEngineCheck. When you do that, be sure to register the new class in the static initialization block in org.apache.fop.layoutengine.LayoutEngineTester.

HowToCreateLayoutEngineTests (last edited 2014-01-29 13:02:40 by VincentHennebert)