====== Using the debugger with an ETF acceptance test ======

The normal mode of checking an acceptance test is to execute the test from the command line, e.g.:

  /tmp/student/EIFGENs/chess/W_code/project -b at1.txt

You might also compare your **actual** output with the **expected** output (perhaps provided, or perhaps obtained from an oracle):

  /tmp/student/EIFGENs/chess/W_code/project -b at1.txt > at1.actual.txt
  diff at1.actual.txt at1.expected.txt

Of course, it is much better to set up the Python script to do regression testing (see the sketch at the end of this section). However, while an acceptance test such as ''at1.txt'' is being developed (or is failing), we might wish to use the debugger to determine where our implementation or contracts are failing. To do this, we must execute our code under development (usually in the W-code directory) directly from the IDE.

To do this we must set up the **Execution Parameters** (accessible from the **Run** menu). Click on the **Add** tab and provide a name for the profile, e.g. ''at1.txt''.

{{:eiffel:etf:debugger:execution.png?840|}}

As shown above, we must provide **Arguments**. In this case, we pass ''-b'' for ETF batch mode and the location of the acceptance test, ''../tests/at1.txt'' (the location is relative to the location of the ECF configuration file).

If you would like to specify the absolute path of the ''tests'' directory, double-click on the **Working Directory** tab and browse to the folder where the acceptance test is located. In that case it is sufficient to provide the arguments ''-b at1.txt'' for the profile.

{{:eiffel:etf:debugger:execution2.png?400|}}

You can now select that profile and execute it. The results of the acceptance test will be printed to the terminal (from which you invoked **estudio**). If there is an exception along the way, the IDE will invoke the debugger and you will be able to examine your code for errors. Once you get the acceptance test to run without exceptions, remember to add it to your regression test suite.
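As an illustration of such a regression step, the following is a minimal sketch of a Python script that runs the compiled project in ETF batch mode on each acceptance test and diffs the actual output against the expected output. The binary path, the ''tests'' directory, and the ''at*.expected.txt'' naming convention are assumptions taken from the examples above, not the course's provided script; adapt them to your own project layout.

<code python>
#!/usr/bin/env python3
# Minimal regression-testing sketch (hypothetical paths and naming convention):
# run the project in ETF batch mode on each acceptance test and compare the
# actual output with the expected output using diff.
import subprocess
import sys
from pathlib import Path

PROJECT = "/tmp/student/EIFGENs/chess/W_code/project"  # compiled W-code binary (assumed path)
TESTS_DIR = Path("tests")                              # directory holding at*.txt tests (assumed)

def run_test(test_file: Path) -> bool:
    expected = test_file.with_suffix(".expected.txt")  # e.g. at1.expected.txt
    actual = test_file.with_suffix(".actual.txt")      # e.g. at1.actual.txt
    # Run the project in batch mode, capturing its output
    with actual.open("w") as out:
        subprocess.run([PROJECT, "-b", str(test_file)], stdout=out, check=True)
    # Compare actual vs. expected output; show a unified diff on mismatch
    result = subprocess.run(["diff", "-u", str(expected), str(actual)],
                            capture_output=True, text=True)
    if result.returncode != 0:
        print(f"FAIL: {test_file.name}\n{result.stdout}")
        return False
    print(f"PASS: {test_file.name}")
    return True

if __name__ == "__main__":
    # Collect acceptance tests, skipping generated .actual/.expected files
    tests = [t for t in sorted(TESTS_DIR.glob("at*.txt"))
             if ".actual" not in t.name and ".expected" not in t.name]
    failures = sum(0 if run_test(t) else 1 for t in tests)
    sys.exit(1 if failures else 0)
</code>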