====== Petition (and Response) ======

Below is a response to a [[https://www.change.org/p/sebastian-magierowski-redesign-and-restructure-eecs-3311-software-design-at-york-university-d522bab8-2f9e-4c6b-a30c-6acb5a3612d7?recruiter=778979185&utm_source=share_petition&utm_medium=copylink&utm_campaign=share_petition|change.org student petition]]: "Redesign and restructure EECS 3311: Software Design at York University". Of those who have signed the petition, it is not clear which have taken EECS3311 --- it appears that the petition can be signed with any email address and name.

----

==== Preliminaries ====

  * Please read [[eiffel:why:|Why Eiffel?]] for an overview of the course goals and of the design chosen to achieve the course learning outcomes.
  * See [[:eiffel:why:evaluation:|formal York University student evaluations of EECS3311]] for sections of the course in the Fall of 2018 and the Winter of 2019. These formal evaluations, by students taking the course, contrast significantly with the points made in the petition.
  * Results are shown for Question 4 (course materials helped me achieve course objectives), Question 5 (course activities helped me achieve course objectives), Question 6 (course tests/exams were directly related to the course objectives) and Question 7 (the course helped me grow intellectually).
  * In all cases, the mean for the course is higher than the departmental and faculty means.
  * In all cases, over 60% of students agree or strongly agree in the positive.

==== Course Learning Outcomes for EECS3311 ====

Software designers are experts at developing software products that are correct, robust, efficient and maintainable. Correctness is the ability of software products to perform according to specification. Robustness is the ability of a software system to react appropriately to abnormal conditions. Software is maintainable if it is well designed according to the principles of abstraction, modularity, and information hiding.
At the end of the course, students will be able to:

  * 1. **Specification**: Describe software specifications via Design by Contract, including the use of preconditions, postconditions, class invariants, and loop variants and invariants.
  * 2. **Construction**: Implement specifications with designs that are correct, efficient and maintainable.
  * 3. **Testing**: Develop systematic approaches to organizing, writing, testing and debugging software.
  * 4. **Analysis**: Develop insight into the process of moving from an ambiguous problem statement to a well-designed solution.
  * 5. **Architecture**: Design software using appropriate abstractions, modularity, information hiding, and design patterns.
  * 6. **Tools**: Develop facility in the use of an IDE for editing, organizing, writing, debugging, testing and documenting code, including the use of BON/UML diagrams for documenting designs, as well as the ability to deploy the software in an executable form.
  * 7. **Documentation**: Develop the ability to write precise and concise software documentation that also describes the design decisions and why they were made.

----

===== The Petition =====

==== Choice of Eiffel ====

> 1. The programming language used: Eiffel. Eiffel is very rarely, if at all, used in industry or academia (outside of York University). As such, there are virtually no resources available, outside of a generic wiki, to help students look at examples and overcome common problems. We believe that Learning the Eiffel language does not enrich the curriculum and provides no benefit to students as candidates in the job industry.

Given that we use Eiffel (the method) not merely as a coding tool but as a **design** tool, it should really be compared to model-driven design tools rather than programming tools. Students need to be encouraged to train themselves to think above the code level when designing. In design, seamless round-trip engineering between models ("blueprints") and code is essential.
One is free to suggest alternative design tools, but those should be evaluated on their own merits as design tools.

[Aside: For more on this, see [[https://youtu.be/-4Yp3j_jk8Q|Video: Thinking Above the Code]] by Leslie Lamport, winner of the 2013 Turing Award: "Architects draw detailed blueprints before a brick is laid or a nail is hammered. Programmers and software engineers seldom do. A blueprint for software is called a specification. This talk explains why some sort of specification should be written for any software." We use Lamport's TLA in EECS4312, which has EECS3311 as a prerequisite.]

As described in [[eiffel:why#the_eiffel_method_tool_ide|why eiffel: the method and tool]], Eiffel provides solid support for design via specifications (Design by Contract), support for UML design constructs such as true multiple inheritance, and seamless round-trip engineering between BON/UML models and code. Below we expand on Design by Contract (DbC).

It is true that the Eiffel language is not as commonly used in industry for __coding__ as alternatives such as Java and C++. However, for the purpose of teaching software design, Eiffel natively supports the construction of executable, mathematical specifications: preconditions and postconditions for routines, invariants and variants for loops, and invariants for classes. See, for example, [[:eiffel:why#design_by_contract:|why eiffel, Design by Contract]]. In other languages, a designer would be forced to use implementation notions such as exceptions, assertions or informal comments.

A major advantage of using Eiffel is that it clearly separates two levels of abstraction: implementation and specification. Given the same (declarative, abstract) specification of an ADT (abstract data type), there exist multiple working (imperative, concrete) implementations which satisfy the specification. In other language alternatives, implementation and specification are not properly distinguished.
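To make the contrast concrete, here is a rough sketch, in Python, of what contract clauses look like when a language has no native support for them. Eiffel expresses the same clauses natively and declaratively (''require'', ''ensure'', ''invariant''); below they collapse into implementation-level assertions, which is precisely the point made above. The class ''BoundedStack'' and its features are invented for illustration.

```python
# Emulating Design by Contract with assertions (no native support).
# In Eiffel, each assert below would be a declarative require/ensure/
# invariant clause, separate from the implementation.

class BoundedStack:
    def __init__(self, capacity):
        assert capacity > 0                       # precondition
        self._items = []
        self._capacity = capacity
        self._invariant()

    def _invariant(self):
        # class invariant: count stays within bounds
        assert 0 <= len(self._items) <= self._capacity

    def push(self, x):
        assert len(self._items) < self._capacity  # precondition: not full
        old_count = len(self._items)
        self._items.append(x)
        assert self._items[-1] == x               # postcondition: x is on top
        assert len(self._items) == old_count + 1  # postcondition: count grew
        self._invariant()

    def pop(self):
        assert len(self._items) > 0               # precondition: not empty
        x = self._items.pop()
        self._invariant()
        return x

s = BoundedStack(2)
s.push(10)
s.push(20)
print(s.pop())  # 20
```

Note that any list-based, array-based, or linked implementation satisfying the same pre/postconditions would be an equally valid design: the specification is the stable part, the implementation is interchangeable.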
Given that the course is about software design (as opposed to implementation, the focus of the earlier programming and data structures courses), the use of a programming language with a clear distinction between implementation and specification is beneficial for students' intellectual growth and future careers. The notion of a specification will remain fundamental to design no matter how technology changes, and is transferable to design in other coding languages.

We usually start the course by explaining how design differs from implementation (what students focus on in first year, in EECS2030 and somewhat in EECS2011), and then justify why Eiffel the method is more suitable for learning about design than languages such as Java that do not provide native support for design constructs. For example, see the following recording of the introductory lecture on design vs. implementation: [[https://www.youtube.com/watch?v=2l2UnYMquuE&list=PL5dxAmCmjv_5O2hx1ARzjI5LQhkX477bw&index=2&t=0s|Introduction to EECS3311]] (from 1:05:07 to the end).

==== Eiffel Language Syntax and IDE ====

> 2. The syntactical challenges of both Eiffel and the non-modern IDE being used lends to the vast majority of the course being about the language itself, which is not the purpose of this course. Eiffel's syntax in particular opposes almost every leading programming language, making knowledge of Eiffel not transferable to other languages.

No details are provided as to why the IDE is "non-modern". In point of fact, very few IDEs support design at the level of EiffelStudio (i.e. contracting, UML constructs such as multiple inheritance, and round-trip engineering between models and code). In addition to design constructs, EiffelStudio supports the standard features of any good IDE, such as code exploration, debuggers, unit testing, profiling, etc.
While the Eiffel method is used throughout the course, it is merely a medium for understanding design itself, as shown in Fig 2 ([[:eiffel:why#design_principles_covered_in_the_course|why eiffel: Design Principles Covered in the Course]]). The focus of the course is on design principles that are transferable to any setting.

We provide an abundance of resources for students to learn the Eiffel language, which is new to them. Please browse this website for textbooks, videos, code snippets and general help such as [[https://www.eiffel.org/documentation]]. For example, in the Winter 2020 term, students were asked to complete [[https://github.com/yuselg/3311-W20-Public/tree/master/Lab0-Spec|Lab0]], which is not counted towards the final grade but is designed to walk students through a variety of syntax and tool basics in preparation for subsequent lab assignments. Lab0 also introduces students -- informally -- to some of the design principles, such as specifications, modularity, information hiding, and class diagrams, and provides a written tutorial (Eiffel 101) and some introductory tutorial videos to familiarize students with the syntax and workflow of the Eiffel language. Almost all students were able to submit Lab0 and achieve a perfect correctness score in the first week of the course. Given these resources, it is reasonable to expect a third-year CS or engineering student to learn the basics necessary for completing the programming/design assignments early in the course.

==== ETF and Mathmodels Library ====

> 3. EECS 3311 uses proprietary technologies such as ETF and the MathModels library, which again require a substantial learning curve, have no relation to the actual purpose of the course, and not usable in the real world as they are completely proprietary to York University. If students do not know how to use these tools, they could potentially fail the course due to concepts unrelated to the actual course content.

Mathmodels and ETF are **not** proprietary technologies.
Mathmodels is an **open-source** library for writing complete contracts at the ADT level using predicate logic, sets, functions, relations, etc. The Mathmodels library is useful for constructing an abstraction function for an application class (e.g., a maze as a graph) that connects the application logic to its specification (e.g. finding the shortest path in a maze), so that the implementation can be checked against its specification. The maze example was used in Lab3.

ETF (Eiffel Testing Framework) is an Eiffel framework that allows instructors to provide students with **open-ended** design problems (i.e. problems that have many different architectural solutions, not all of them good). Yet it also allows instructors to test the feasibility and correctness of the students' designs via acceptance tests. This is just one component of evaluating student designs. We are open to better ways to do this assessment for hundreds of students submitting many labs and a project (in two phases), but so far a better way has not been demonstrated.

In another component of design assessment, students are required to provide software design documents in which they give an architectural overview of the system organization (e.g. via class diagrams); a table of all the modules describing the responsibilities and secrets of each (to assess whether the design has appropriate abstraction and information hiding); a description of all the design decisions; the significant contracts in the design that ensure its correctness; and a summary of their testing procedures. See [[:eiffel:why#system_design_document_sdd|why eiffel: System Design Documents]]. The theoretical and pedagogical contributions of ETF are reported in a model-driven engineering conference paper: https://ieeexplore.ieee.org/document/8501488.

==== Grading Criteria ====

> 4. The marking scheme does not properly reflect students' understanding of the course's core concepts. This problem is due to numerous issues.
> Most importantly, students are tested on their knowledge of Eiffel syntax and the implementation of particular algorithms instead of their understanding of Software Design Patterns. Additionally, students may get a failing grade while passing all test cases if they do not pass the instructors' 'hidden' test cases, which can cause many issues with grading. Particularly, this leads to the students attempting to create many test cases, which are not necessarily effective. Many students in this course have not taken any software testing courses as of yet. In effect, students are being told to test their software with very little knowledge of proper testing patterns. This course is not a Software QA or testing course, the heavy focus on software testing is mis-placed.

The petition does not reflect how evaluation is actually done: via checks of design feasibility and correctness, and via written software design documents describing the organization of the components. Below is a summary of a typical evaluation scheme; the petition omits some significant details.

----

=== Evaluation of Student Designs ===

  * Lab0 (not graded)
    * Overview of the Eiffel language, method, and tool. See Lab0 (discussed above), where design principles such as modularity, information hiding, and class diagrams are discussed. This is also where students develop skills in using a unit testing framework specialized for design.
  * Lab1
    * Implementing graph algorithms that students must construct to satisfy provided formal specifications.
    * Evaluation: Tests that check that the implementation satisfies the provided specification (this is done using the unit testing framework).
  * Students have now developed familiarity with the Eiffel language and IDE and some introductory design principles. The focus now changes to design in more depth.
  * Lab2
    * Develop more graph algorithms, but at the mathematical ADT level, using Mathmodels for complete specifications.
    * Evaluation: Tests that check that the implementation satisfies the specification, and tests that check that the **specifications themselves are correct**. The unit testing framework is used to do this.
  * Lab3
    * Design a maze game using graphs (from Lab1 and Lab2) as the abstract model.
    * Evaluation:
      * **Acceptance** tests for design feasibility and correctness
      * Design document: students submit high-level class diagrams
  * Labtest1
    * Evaluation of an Iterator design pattern
  * Project phase 1
    * Open-ended design of a galaxy simulation game
    * Evaluation:
      * Acceptance tests for design feasibility and correctness
      * Design document: high-level class diagrams annotated with contracts
  * Project phase 2
    * New features added to the galaxy simulation game, to check software design maintainability.
    * Evaluation:
      * Acceptance tests for design feasibility and correctness
      * Complete software design document (see [[:eiffel:why#system_design_document_sdd|SDD]])
    * A software engineering student who completed the course wrote as follows: "//the Project is one of the most important parts of the entire SE curriculum, since it is the only major programming project in 3rd year, similar to the 2311 one, and I would say I took more out of that assignment than any other project I've worked on (aside from capstone). It is difficult to create such a complete software design & development scenario, and efforts to shrink the workload through the inclusion of ETF definitely help try to make the process as 'small' as it could be//".
    * The workload is very high by comparison with other 3-credit courses, in large part due to the project, but removing the project would also remove this learning outcome.
  * Labtest2
    * Evaluation of an unbounded undo/redo design pattern
  * Written Exam
    * Comprehensive questions, including small designs exercising design principles such as modularity, information hiding, specifications and design patterns.
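To give a flavour of the Labtest2 topic, here is a minimal sketch, in Python, of the idea behind an unbounded undo/redo design: the Command pattern with two unbounded stacks. This is an illustration only, not the Labtest2 solution; the classes ''AddCommand'', ''Counter'' and ''History'' are invented.

```python
# Unbounded undo/redo via the Command pattern: each action is an object
# that knows how to execute and how to undo itself; a History keeps
# two stacks of already-executed commands.

class AddCommand:
    def __init__(self, target, amount):
        self.target, self.amount = target, amount
    def execute(self):
        self.target.value += self.amount
    def undo(self):
        self.target.value -= self.amount

class Counter:
    def __init__(self):
        self.value = 0

class History:
    def __init__(self):
        self._undo_stack, self._redo_stack = [], []
    def do(self, command):
        command.execute()
        self._undo_stack.append(command)
        self._redo_stack.clear()      # a fresh action invalidates redo
    def undo(self):
        cmd = self._undo_stack.pop()
        cmd.undo()
        self._redo_stack.append(cmd)
    def redo(self):
        cmd = self._redo_stack.pop()
        cmd.execute()
        self._undo_stack.append(cmd)

c = Counter()
h = History()
h.do(AddCommand(c, 5))
h.do(AddCommand(c, 3))
h.undo()
print(c.value)  # 5
h.redo()
print(c.value)  # 8
```

Because the stacks are unbounded, every action in a session can be undone and redone; in the course, the corresponding design is specified with contracts (e.g. undo is only permitted when the undo stack is non-empty).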
----

Second-year courses introduce students to testing and the use of unit testing frameworks (such as JUnit) -- it is thus expected that students have these skills. This is not the same thing as an in-depth testing course (which is not needed for EECS3311). But unit testing is far from the sole evaluation mechanism in the course. We use a variety of evaluation mechanisms, including the evaluation of design feasibility & correctness via comprehensive static checks at compile time and dynamic tests (using a variety of unit, specification and acceptance tests), as well as the evaluation of student software design documents (see [[:eiffel:why#system_design_document_sdd|here]] for the details). While various static and dynamic checks are important, so are written reports describing and justifying the organization of the design. The actual grading scheme comprehensively evaluates student designs in a variety of ways.

Here are two important questions to consider in assessing designs: **Is the design feasible? Is the design correct?**

=== Fig. 1: Evaluating Design Feasibility and Correctness ===

Below is a UML description of a design using a well-respected **design** tool in a software design course elsewhere:

{{:eiffel:why:petition:uml-consistency.png?500|}}

The tool does not flag that there are problems (obvious to spot in this case, given that the UML class diagram is simple). (a) Two classes should not have the same name (the name C occurs twice). (b) More seriously, class B inherits from class A, which inherits from class C, which (interface) inherits from class B. This is circular, and thus an **inconsistency** that makes the design **infeasible** -- but the tool does not flag it.

What this means is that we seek a tool that will flag these design problems. Even if the tool can do so, and perhaps generate template code, another problem arises. Suppose the program text is changed (including changes in design). Will that be reflected back into the UML model?
This is where round-trip engineering between models and code becomes very significant. The Eiffel IDE supports this kind of design. It is also not surprising that instructors use a variety of static and dynamic tests to grade student designs (not all of them given in advance of the submission). This is to encourage students to constantly regression-test their own designs -- which, by the time they reach 3rd year, is not an unreasonable expectation. Without a suitable model-driven design tool that ensures the consistency of different views of the software (e.g. contract view, class diagram view, etc.), a putative design might be just a set of meaningless bubbles and arrows. For more on this, see the footnote on the **single model principle**. ((For more on the **single model principle** in publications, see [[https://www.eecs.yorku.ca/~jonathan/publications/2002/SingleModel.pdf|here]] and [[https://www.eecs.yorku.ca/~jonathan/publications/2007/TOSEM-2007.pdf|here]]))

----

Labs and project assignments are assessed from two perspectives: correctness of the design, and architecture of the design. The former assessment is performed via a number of automated unit and/or acceptance tests. As with any programming language, in order to write and execute a test, the programmer must have proper knowledge of the language syntax and of the algorithms being implemented. In the case of an ETF lab or the end-of-semester project, students are in a position to judge whether a design pattern learned in class is applicable to solving the problem at hand (and such an open-ended nature is the essence of design). Each lab or project comes with a starter project which includes clear instructions and a small number of basic tests. If a program does not produce the correct output even for the basic cases, why should it be awarded a passing mark?
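The feasibility problems shown in Fig. 1 (a duplicate class name and a circular inheritance chain) are mechanical to detect once the design is represented as data; a tool that fails to flag them is missing a basic static check. A sketch, in Python, with an invented parent-list representation of a design model:

```python
# Two feasibility checks over a design model: duplicate class names,
# and cycles in the inheritance relation (detected by depth-first
# search with white/gray/black colouring).

def duplicate_names(names):
    seen, dups = set(), set()
    for n in names:
        (dups if n in seen else seen).add(n)
    return dups

def has_cycle(inherits):  # inherits: {class_name: [parent names]}
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {c: WHITE for c in inherits}

    def visit(c):
        color[c] = GRAY
        for p in inherits.get(c, []):
            if color.get(p, WHITE) == GRAY:
                return True            # back edge: inheritance cycle
            if color.get(p, WHITE) == WHITE and p in inherits and visit(p):
                return True
        color[c] = BLACK
        return False

    return any(color[c] == WHITE and visit(c) for c in inherits)

# The design of Fig. 1: B inherits A, A inherits C, C inherits B.
design = {"B": ["A"], "A": ["C"], "C": ["B"]}
print(has_cycle(design))                      # True: infeasible design
print(duplicate_names(["A", "B", "C", "C"]))  # {'C'}
```

A compiler for the design (as in EiffelStudio, where the class diagram and the code are views of one model) performs such checks as a matter of course; a drawing-only tool does not.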
For a third-year student in CS or CE/SE, is it not reasonable to expect a submission that goes beyond compilation and the basic tests? While this is not a course on software testing techniques, it is reasonable to expect students to write additional test cases covering the missing classes of inputs (e.g., boundary values). This course is also meant to train students in the mindset needed for their future careers: in a workplace, would you expect your supervisor, upon assigning you a task, to tell you exactly what they will check in your submitted work? Can you hold your supervisor responsible when they identify a flaw in the correctness of your implementation?

==== Learning Design by Contract in Eiffel ====

> 5. EECS3311 places a heavy focus on design by contract. Apart from brief mentions, Test-Driven design or any other type of software design is not explored in depth. This heavy focus on design-by-contract is used as a justification for using the Eiffel language. However, we believe that students should also be able to learn to implement design by contract without explicit contracts - doing so should give students the ability to take away from the design by contract ideology and implement it in the real world with industry-standard programming languages.

By the phrase "Test-Driven __design__", the students probably mean TDD ([[https://en.wikipedia.org/wiki/Test-driven_development#xUnit_frameworks|Test Driven Development]]), which relies on [[https://en.wikipedia.org/wiki/Test-driven_development#xUnit_frameworks|unit testing frameworks]]. Unit testing is introduced right at the beginning, in Lab0, and used throughout the course; in fact some combination of unit testing, specification testing in design, and acceptance testing is used in every lab and the project.
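The earlier point about covering the missing classes of inputs can be made concrete with a small unit-testing sketch, here in an xUnit-style framework (Python's ''unittest''). The function ''clamp'' and its tests are invented for illustration: the provided "basic" test exercises only a typical value, and the student adds the boundary and out-of-range cases.

```python
# Boundary-value tests beyond a provided "basic" test case.
import unittest

def clamp(x, lo, hi):
    """Return x limited to the closed interval [lo, hi]."""
    return max(lo, min(x, hi))

class ClampBoundaryTests(unittest.TestCase):
    def test_typical(self):          # the kind of basic case provided
        self.assertEqual(clamp(5, 0, 10), 5)
    def test_lower_boundary(self):   # x exactly at lo
        self.assertEqual(clamp(0, 0, 10), 0)
    def test_upper_boundary(self):   # x exactly at hi
        self.assertEqual(clamp(10, 0, 10), 10)
    def test_below_range(self):
        self.assertEqual(clamp(-1, 0, 10), 0)
    def test_above_range(self):
        self.assertEqual(clamp(11, 0, 10), 10)

unittest.main(argv=["clamp_tests"], exit=False, verbosity=0)
```

A hidden instructor test that probes a boundary case fails only for submissions that never considered that input class -- which is exactly the gap such hidden tests are meant to expose.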
TDD is described early on and used repeatedly, with the stress on regression testing of design feasibility and correctness as new features are introduced into the design. What is also perplexing is that in #4 of the petition, the complaint is that students have not studied testing, and that "students may get a failing grade while passing all test cases if they do not pass the instructors' 'hidden' test cases, which can cause many issues with grading." The hidden tests are precisely there to ensure that students apply the regression-testing aspect of TDD.

The petition also claims that "other types of software design is not explored in depth", but this is not so, as mentioned earlier and as described in the design principles of [[http://seldoc.eecs.yorku.ca/doku.php/eiffel/why#design_principles_covered_in_the_course|why eiffel: Fig 1]]. Many other design principles are covered in depth and applied in many design patterns (covered throughout the course in depth).

Ada and Eiffel provide native language support for Design by Contract, but the principle that **specifications** are integral to design carries over to designing in any language. The Unified Modeling Language (UML), the //de facto// modelling language in software engineering, likewise supports Design by Contract in the form of OCL (the Object Constraint Language). For other methods of specification, see [[https://youtu.be/-4Yp3j_jk8Q|Video: Thinking Above the Code]] and [[https://youtu.be/GMQMzk3DZug|Specifications in Industry]] at Amazon, Google and Facebook.

==== Learning Software Design in Eiffel ====

> 6. In summary, the course does not focus on Software Design, in contradiction with its name. Instead, it focuses on gaining knowledge of the semantics of proprietary, obsolete tools and heavy software testing (again - with students that do not know proper test plans and testing patterns).
In summary, the course focuses on many foundational aspects of design, and a sequence of labs and a significant project allow students to exercise these various design principles. As mentioned above, formal student evaluations of past offerings of the course have been relatively positive.