Below is a response to a change.org student petition: “Redesign and restructure EECS 3311: Software Design at York University”.
Of the students who have signed the petition, it is unclear which of them have actually taken EECS 3311; it appears that the petition can be signed with any email address and name.
Software designers are experts at developing software products that are correct, robust, efficient and maintainable. Correctness is the ability of software products to perform according to specification. Robustness is the ability of a software system to react appropriately to abnormal conditions. Software is maintainable if it is well-designed according to the principles of abstraction, modularity, and information hiding. At the end of the course, students will be able to:
1. The programming language used: Eiffel. Eiffel is very rarely, if at all, used in industry or academia (outside of York University). As such, there are virtually no resources available, outside of a generic wiki, to help students look at examples and overcome common problems. We believe that learning the Eiffel language does not enrich the curriculum and provides no benefit to students as candidates in the job industry.
Given that we use Eiffel (the method) not merely as a coding tool, but as a design tool, it should really be compared to model-driven design tools rather than programming tools. Students need to be encouraged to train themselves to think above the code level in design. In design, seamless round-trip engineering between models (“blueprints”) and code is essential. One is free to suggest alternative design tools, but those should be evaluated on their own merits as design tools.
[Aside: For more on this, see Video: Thinking Above the Code by Leslie Lamport, 2013 Turing Award winner: Architects draw detailed blueprints before a brick is laid or a nail is hammered. Programmers and software engineers seldom do. A blueprint for software is called a specification. This talk explains why some sort of specification should be written for any software. We use Lamport's TLA in EECS 4312, which has EECS 3311 as a prerequisite.]
As described in why eiffel: the method and tool, Eiffel provides solid support for design via specifications (Design by Contract), support for UML design constructs such as true multiple inheritance, and seamless round-trip engineering between BON/UML models and code. Below we expand on Design by Contract (DbC).
It is true that the Eiffel language is not as commonly used in industry for coding as alternatives such as Java and C++. However, for the purpose of teaching software design, Eiffel natively supports the construction of executable, mathematical specifications: preconditions and postconditions for routines, invariants and variants for loops, and invariants for classes. See, for example, why eiffel, Design by Contract. In the alternatives, a designer would be forced to use implementation notions such as exceptions, assertions, or informal comments. A major advantage of using Eiffel is that it clearly separates two levels of abstraction: implementation and specification. Given the same (declarative, abstract) specification of an ADT (abstract data type), there exist multiple working (imperative, concrete) implementations that satisfy the specification. In the other language alternatives, implementation and specification are not properly distinguished.
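Eiffel expresses these contracts natively with require, ensure, and invariant clauses. Purely as a rough illustration of the idea (a hypothetical sketch, not course material, and not a substitute for Eiffel's native support), the same contracts can be emulated in Python with assertions:

```python
class BoundedStack:
    """A bounded stack with Eiffel-style contracts emulated by assertions."""

    def __init__(self, capacity):
        assert capacity > 0                         # precondition (Eiffel: require)
        self._items = []
        self._capacity = capacity
        self._invariant()

    def _invariant(self):
        # class invariant (Eiffel: invariant): count always within bounds
        assert 0 <= len(self._items) <= self._capacity

    def push(self, x):
        assert len(self._items) < self._capacity    # precondition: not full
        old_count = len(self._items)
        self._items.append(x)
        assert len(self._items) == old_count + 1    # postcondition (Eiffel: ensure)
        assert self._items[-1] == x                 # postcondition: x is on top
        self._invariant()

    def pop(self):
        assert len(self._items) > 0                 # precondition: not empty
        old_count = len(self._items)
        top = self._items.pop()
        assert len(self._items) == old_count - 1    # postcondition
        self._invariant()
        return top

s = BoundedStack(2)
s.push(1)
s.push(2)
assert s.pop() == 2
```

Note the difference from the native mechanism: here the contracts are tangled into the implementation body, whereas in Eiffel they live at the specification level, visible in the contract view of the class independently of any particular implementation.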
Given that the course is about software design (as opposed to implementation, the focus of the earlier programming and data structures courses), the use of a language with a clear distinction between implementation and specification is beneficial for students' intellectual growth and future careers. The notion of a specification will remain fundamental to design no matter how technology changes, and is transferable to design in other coding languages.
We usually start the course by explaining how design is different from implementation (the focus of first year, EECS2030, and to some extent EECS2011), and then justify why Eiffel the method is more suitable for learning about design than languages such as Java that do not provide native support for design constructs. For example, see the following recording of the introductory lecture on design vs. implementation: Introduction to EECS3311 (starting from 1:05:07 to the end)
2. The syntactical challenges of both Eiffel and the non-modern IDE being used lead to the vast majority of the course being about the language itself, which is not the purpose of this course. Eiffel’s syntax in particular opposes almost every leading programming language, making knowledge of Eiffel not transferable to other languages.
No details are provided as to why the IDE is “non-modern”. In point of fact, very few IDEs support design at the level of EiffelStudio (i.e., contracting, UML constructs such as multiple inheritance, and round-trip engineering between models and code). In addition to design constructs, EiffelStudio supports the standard features of any good IDE, such as code exploration, debugging, unit testing, and profiling.
While the Eiffel method is used throughout the course, that is merely as a medium to understand design itself as shown in Fig 2 (why eiffel: Design Principles Covered in the Course). The focus of the course is on design principles that are transferable to any setting.
We provide an abundance of resources for students to learn the Eiffel language. Please browse this website for textbooks, videos, code snippets, and general help: https://www.eiffel.org/documentation.
For example, in the Winter 2020 term, students were asked to complete Lab0, which is not counted towards the final grade but is designed to walk students through a variety of syntax and tool basics in preparation for subsequent lab assignments. Lab0 also introduces students, informally, to some of the design principles, such as specifications, modularity, information hiding, and class diagrams; a written tutorial (Eiffel 101) and some introductory tutorial videos familiarize students with the syntax and workflow of the Eiffel language. Almost all students were able to submit Lab0 and achieve a perfect correctness score in the first week of the course. Given these resources, it is reasonable to expect a third-year CS or engineering student to learn the basics necessary for completing the programming/design assignments early on in the course.
3. EECS 3311 uses proprietary technologies such as ETF and the MathModels library, which again require a substantial learning curve, have no relation to the actual purpose of the course, and are not usable in the real world as they are completely proprietary to York University. If students do not know how to use these tools, they could potentially fail the course due to concepts unrelated to the actual course content.
MathModels and ETF are not proprietary technologies.
MathModels is an open-source library for writing complete contracts at the ADT level using predicate logic, sets, functions, relations, etc. The MathModels library is useful for constructing an abstraction function for an application class (e.g., a maze as a graph) that connects the application logic to its specification (e.g., finding the shortest path in a maze), so that the implementation can be checked against its specification. The maze example was used in Lab3.
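MathModels itself is an Eiffel library; purely to illustrate the idea of an abstraction function (a hypothetical Python sketch, not MathModels code), the implementation below stores a graph as adjacency lists, while the model views it as a mathematical relation, a set of ordered pairs, over which specification-level checks are stated:

```python
class Graph:
    """Implementation view: a directed graph stored as adjacency lists."""

    def __init__(self):
        self._adj = {}

    def add_edge(self, u, v):
        # implementation detail: adjacency sets keyed by source vertex
        self._adj.setdefault(u, set()).add(v)
        self._adj.setdefault(v, set())

    def model(self):
        """Abstraction function: map the implementation to its mathematical
        model, the edge relation as a set of ordered pairs."""
        return {(u, v) for u, vs in self._adj.items() for v in vs}

g = Graph()
g.add_edge('a', 'b')
g.add_edge('b', 'c')

# Specification-level check, stated over the model rather than over
# the implementation's data structures: after add_edge(u, v), the
# pair (u, v) belongs to the edge relation.
assert ('a', 'b') in g.model()
assert ('c', 'a') not in g.model()
```

The point is that the specification talks only about the abstract relation; any implementation (adjacency lists, an edge list, a matrix) that induces the same model satisfies the same contracts.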
ETF (Eiffel Testing Framework) is an Eiffel framework that allows instructors to provide students with open-ended design problems (i.e., problems that have many different architectural solutions, not all of them good). Yet it also allows instructors to test the feasibility and correctness of the students' designs via acceptance tests. This is just one component of evaluating student designs. We are open to better ways to do this assessment for hundreds of students submitting many labs and a project (in two phases), but so far a better way has not been demonstrated.
In another component of design assessment, students are required to provide software design documents in which they provide an architectural overview of the system organization (e.g., via class diagrams), a table of all the modules, in which students describe the responsibilities and secrets of the modules (to assess whether the design has appropriate abstraction and information hiding), a description of all the design decisions, the significant contracts in the design to ensure its correctness, and a summary of their testing procedures. See why eiffel: System Design Documents.
The theoretical and pedagogical contributions of ETF are reported in a model-driven engineering conference paper: https://ieeexplore.ieee.org/document/8501488.
4. The marking scheme does not properly reflect students' understanding of the course’s core concepts. This problem is due to numerous issues. Most importantly, students are tested on their knowledge of Eiffel syntax and the implementation of particular algorithms instead of their understanding of Software Design Patterns. Additionally, students may get a failing grade while passing all test cases if they do not pass the instructors' 'hidden' test cases, which can cause many issues with grading. Particularly, this leads to students attempting to create many test cases, which are not necessarily effective. Many students in this course have not taken any software testing courses as of yet. In effect, students are being told to test their software with very little knowledge of proper testing patterns. This course is not a Software QA or testing course; the heavy focus on software testing is misplaced.
The petition does not reflect how evaluation is actually done via checks of design feasibility and correctness, and written software design documents describing the organization of the components. Below is a summary of a typical evaluation scheme. The petition omits some significant details.
Second year courses introduce students to testing and the use of unit testing frameworks (such as JUnit) – and it is thus expected that students have these skills. This is not the same thing as an in-depth testing course (which is not needed for EECS3311).
But unit testing is far from the sole evaluation mechanism in the course. We use a variety of evaluation mechanisms, including the evaluation of design feasibility and correctness via comprehensive static checks at compile time and dynamic tests (using a variety of unit, specification, and acceptance tests), as well as the evaluation of student software design documents (see here for the details). While various static and dynamic checks are important, so are written reports describing and justifying the organization of the design.
The actual grading scheme comprehensively evaluates student designs in a variety of ways.
Here are two important questions to consider in assessing designs: Is the design feasible? Is the design correct?
Below is a UML description of a design using a well-respected design tool in a software design course elsewhere:
The tool does not flag that there are problems (obvious to spot in this case, given that the UML class diagram is simple). (a) Two classes should not have the same name (e.g., the name C occurs twice). (b) More seriously, class B inherits from class A, which inherits from class C, which (interface-)inherits from class B. This is circular, and thus an inconsistency that makes the design infeasible, but the tool does not flag it.
What this means is that we seek a tool that will flag these design problems. Even if the tool can do so, and perhaps generate templating code, another problem arises. Suppose the program text is changed (including changes in design). Will that be reflected back into the UML model? This is where round-trip engineering between models and code becomes very significant. The Eiffel IDE supports this kind of design.
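The circular-inheritance problem is exactly the kind of feasibility check a design tool can automate. As a minimal sketch (hypothetical code, not part of any tool mentioned here), a depth-first search over the inheritance graph flags the cycle B inherits A, A inherits C, C inherits B from the example:

```python
def has_cycle(inherits):
    """Detect a cycle in an inheritance graph given as {class: [parents]}."""
    WHITE, GREY, BLACK = 0, 1, 2      # unvisited / on current path / done
    colour = {c: WHITE for c in inherits}

    def visit(c):
        colour[c] = GREY
        for parent in inherits.get(c, ()):
            if colour.get(parent) == GREY:        # back edge: a cycle
                return True
            if colour.get(parent) == WHITE and visit(parent):
                return True
        colour[c] = BLACK
        return False

    return any(colour[c] == WHITE and visit(c) for c in inherits)

# The infeasible design from the diagram: B -> A -> C -> B
design = {'B': ['A'], 'A': ['C'], 'C': ['B']}
assert has_cycle(design)

# Removing C's inheritance link makes the design feasible again.
assert not has_cycle({'B': ['A'], 'A': ['C'], 'C': []})
```

A consistency check of this sort is cheap to run on every change, which is precisely why a design tool, rather than the designer's eye, should be responsible for it.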
It is also not surprising that instructors will use a variety of static and dynamic tests to grade student designs (not all of them given in advance of the submission). This is to encourage students to constantly regression test their own designs – which by the time they reach 3rd year is not an unreasonable expectation.
Without a suitable model-driven design tool that ensures the consistency of different views of the software (e.g., contract view, class diagram view, etc.), a putative design might just be a set of meaningless bubbles and arrows. For more on this, see the footnote on the single model principle.
Labs and project assignments are assessed from two perspectives: correctness of design and architecture of design. The former assessment is performed via a number of automated unit and/or acceptance tests. As in any other programming language, in order to execute a test, the programmer ought to have proper knowledge of the program syntax and the implementation of algorithms. In the case of an ETF lab or the end-of-semester project, students are in a position to judge whether a design pattern learned in class is applicable to solving the problem at hand (and such open-endedness is indeed the nature of design).
Each lab or project comes with a starter project that includes clear instructions and a small number of basic tests. If a program does not produce the correct output even for the basic cases, why should it be awarded a passing mark?
For a third-year student in CS or CE/SE, is it not reasonable to expect a piece of software that goes beyond compilation and the basic tests? While this is not a course on software testing techniques, it is arguably reasonable to expect students to write additional test cases covering the missing classes of inputs (e.g., boundary values). This course is also meant to train students in the mindset needed for their future career: in a workplace, would it be reasonable for your supervisor, upon assigning you a task, to tell you exactly what they will check in your submitted work? Could you hold your supervisor responsible when they identify a flaw in the correctness of your implementation?
5. EECS3311 places a heavy focus on design by contract. Apart from brief mentions, Test-Driven design or any other type of software design is not explored in depth. This heavy focus on design-by-contract is used as a justification for using the Eiffel language. However, we believe that students should also be able to learn to implement design by contract without explicit contracts - doing so should give students the ability to take away from the design by contract ideology and implement it in the real world with industry-standard programming languages.
By the phrase “Test-Driven design”, the students probably mean TDD (Test-Driven Development), which relies on unit testing frameworks. Unit testing is introduced right at the beginning, in Lab0, and used throughout the course; in fact, some combination of unit testing, specification testing, and acceptance testing is used in every lab and in the project. TDD is described early on and used repeatedly, with the stress on regression testing of design feasibility and correctness as new features are introduced into the design.
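The regression aspect can be sketched with Python's standard unittest framework (a hypothetical toy example, not course material): tests written for the first feature keep running unchanged as the next feature is added, so a regression in deposit would be caught while developing withdraw:

```python
import unittest

# Feature under design: a simple Account class, grown feature by feature.
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        assert amount > 0                      # contract-style precondition
        self.balance += amount

    def withdraw(self, amount):                # feature added later
        assert 0 < amount <= self.balance
        self.balance -= amount

class AccountTests(unittest.TestCase):
    # Test written alongside the first feature...
    def test_deposit(self):
        a = Account()
        a.deposit(50)
        self.assertEqual(a.balance, 50)

    # ...and still run (regression) when the next feature is added.
    def test_withdraw(self):
        a = Account()
        a.deposit(50)
        a.withdraw(20)
        self.assertEqual(a.balance, 30)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(AccountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

The hidden instructor tests play the same role as the earlier tests in this loop: they check that behaviour established previously still holds once students have added or refactored features.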
What is also perplexing is that in #4 of the petition, the complaint is that students have not studied testing, and “students may get a failing grade while passing all test cases if they do not pass the instructors' 'hidden' test cases, which can cause many issues with grading.” The hidden tests are precisely there to ensure that students apply the regression testing aspect of TDD.
The petition also claims that “other types of software design is not explored in depth”, but this is not so, as mentioned earlier and as described in the design principles listed in why eiffel: Fig 1. Many other design principles are covered in depth and applied in many design patterns throughout the course.
Ada and Eiffel provide native language support for Design by Contract, but the principle that specifications are integral to design carries over to design in any language. The Unified Modeling Language (UML), the de facto modelling language in software engineering, supports Design by Contract in the form of OCL (the Object Constraint Language). For other methods of specification, see Video: Thinking Above the Code and Specifications in Industry at Amazon, Google and Facebook.
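In languages without native contract support, the discipline can still be practised, which speaks directly to the petition's point about applying DbC in industry-standard languages. As a hedged illustration (hypothetical helper decorators, not a standard library API), preconditions and postconditions can be emulated in Python:

```python
from functools import wraps

def require(check, message="precondition violated"):
    """DbC-style precondition decorator (hypothetical helper)."""
    def decorate(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            assert check(*args, **kwargs), message
            return f(*args, **kwargs)
        return wrapper
    return decorate

def ensure(check, message="postcondition violated"):
    """DbC-style postcondition decorator: checks a predicate on the result."""
    def decorate(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            result = f(*args, **kwargs)
            assert check(result), message
            return result
        return wrapper
    return decorate

@require(lambda x: x >= 0, "x must be non-negative")
@ensure(lambda r: r >= 0)
def integer_sqrt(x):
    # largest n with n * n <= x
    n = 0
    while (n + 1) * (n + 1) <= x:
        n += 1
    return n

assert integer_sqrt(10) == 3
```

What the emulation cannot give is what the native mechanism provides: contracts as browsable, inherited specification text, checked uniformly by the compiler and IDE rather than hand-wired by each programmer.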
6. In summary, the course does not focus on Software Design, in contradiction with its name. Instead, it focuses on gaining knowledge of the semantics of proprietary, obsolete tools and heavy software testing (again - with students that do not know proper test plans and testing patterns).
In summary, the course focuses on many foundational aspects of design, and a sequence of labs and a significant project allow students to exercise these design principles. As mentioned above, student evaluations of the course in past terms have been relatively positive.