  /tmp/student/EIFGENs/chess/W_code/project -b at1.txt

You might also compare your **actual** output with the **expected** output (perhaps provided, or perhaps from an oracle).

  /tmp/student/EIFGENs/chess/W_code/project -b at1.txt > at1.actual.txt
  diff at1.actual.txt at1.expected.txt

Of course, it is much better to set up the Python script to do regression testing.
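
As an illustration only, here is a minimal sketch of such a regression script. The script itself, its name, and the file layout it assumes (every test input ''atN.txt'' paired with an ''atN.expected.txt'' in the current directory) are hypothetical, not part of the course setup:

  #!/usr/bin/env python3
  # Minimal regression-testing sketch (illustration only; the file layout
  # and script name are assumptions, not part of the course setup).
  # For each acceptance test at*.txt, run the workbench binary on it and
  # compare its actual output against the matching at*.expected.txt file.
  import glob
  import subprocess
  import sys

  PROJECT = "/tmp/student/EIFGENs/chess/W_code/project"

  def run_tests() -> int:
      failures = 0
      for test in sorted(glob.glob("at*.txt")):
          # Skip expected/actual output files; keep only the test inputs.
          if test.endswith((".expected.txt", ".actual.txt")):
              continue
          base = test[:-len(".txt")]
          result = subprocess.run([PROJECT, "-b", test],
                                  capture_output=True, text=True)
          with open(base + ".expected.txt") as f:
              expected = f.read()
          if result.stdout == expected:
              print("PASS", test)
          else:
              print("FAIL", test)
              failures += 1
      return failures

  if __name__ == "__main__":
      sys.exit(run_tests())

Running it (e.g. ''python3 regress.py'', with ''regress.py'' being a name chosen here for illustration) prints PASS or FAIL for each test and exits with the number of failures, so it could also be called from a Makefile or a CI job.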

However, when an acceptance test such as ''at1.txt'' is being developed (or failing), we might wish to use the debugger to determine where our implementation or contracts are failing. To do this, we must execute our code under development (usually in the ''W_code'' directory) from the IDE directly, which requires setting up the **Execution Parameters** (accessible from the **Run** menu).

Click on the **Add** tab and provide a name for the profile, e.g. ''at1.txt''.