Instructors of programming courses must manage a variety of pedagogical dependencies in their teaching materials. For instance, updating the code used in a single lesson can require cascading changes to other lessons in the course. Currently, instructors must maintain these dependencies manually across many files, which is tedious and error-prone. To help them track pedagogical code dependencies, we created a system called Codehound that uses static analysis to automatically detect where functions are introduced and reused throughout an entire course. To show how Codehound can be used, we present three usage scenarios inspired by our own experiences teaching large data science courses. These scenarios demonstrate how Codehound can help instructors create new content, collaborate with staff to refactor existing content, and estimate the cost of future course changes.
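The abstract above does not include Codehound's implementation. As a rough illustration of the kind of introduce/reuse detection it describes, the following is a minimal Java sketch under two assumptions of ours: lessons are Python files, and sorting file names yields course order. The class name and the regex heuristic are ours, not Codehound's.

import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.regex.*;
import java.util.stream.*;

/** Heuristic scan: where is each function introduced, and which later lessons reuse it? */
public class FunctionDependencyScan {
    // Matches top-level Python function definitions, e.g. "def clean_data(".
    private static final Pattern DEF = Pattern.compile("^def\\s+(\\w+)\\s*\\(", Pattern.MULTILINE);

    public static void main(String[] args) throws IOException {
        // args[0] is the course root; assumes sorting .py file names yields lesson order.
        List<Path> lessons;
        try (Stream<Path> s = Files.walk(Path.of(args[0]))) {
            lessons = s.filter(p -> p.toString().endsWith(".py")).sorted().collect(Collectors.toList());
        }
        Map<String, Path> introducedIn = new LinkedHashMap<>();  // function -> first defining lesson
        Map<String, List<Path>> reusedIn = new LinkedHashMap<>(); // function -> later lessons calling it
        for (Path lesson : lessons) {
            String src = Files.readString(lesson);
            Set<String> definedHere = new HashSet<>();
            Matcher m = DEF.matcher(src);
            while (m.find()) {
                definedHere.add(m.group(1));
                introducedIn.putIfAbsent(m.group(1), lesson); // record the introduction site
            }
            // Any call to a previously introduced function counts as a reuse dependency.
            for (String fn : introducedIn.keySet()) {
                boolean called = Pattern.compile("\\b" + fn + "\\s*\\(").matcher(src).find();
                if (called && !definedHere.contains(fn)) {
                    reusedIn.computeIfAbsent(fn, k -> new ArrayList<>()).add(lesson);
                }
            }
        }
        reusedIn.forEach((fn, uses) -> System.out.printf(
            "%s: introduced in %s, reused in %s%n", fn, introducedIn.get(fn), uses));
    }
}

A real tool would use a proper parser rather than regexes, but even this sketch shows how an instructor could see which later lessons break when a function in an early lesson changes.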
First-year computer science (CS) courses have mean failure rates as high as 30.3% [13]. To identify and mitigate potential contributing factors to this problem, this study investigates how the understanding of abstraction impacts students' programming ability and subsequent success in a first-year data structures course. Specifically, we use videos to explicitly introduce the concept of abstraction and assess understanding through quizzes directly related to concrete programming exercises. Our work is motivated and guided by related work on abstract thinking as it relates to the skill set of a computer scientist, in addition to existing work on introducing abstraction as a learning outcome in computer science education. We measure students' understanding of abstraction through a series of short weekly quizzes tightly tied to graded programming exercises. Through our analysis, we identify specific topics in the introductory CS course that present abstraction difficulties for students and suggest potential reasons why these topics are particularly challenging. We also evaluate students' learning experience when abstraction is taught explicitly, discussing both successes and areas in need of improvement. Finally, we recommend introducing abstraction into the early CS curriculum as an explicit learning outcome and treating it as a persistent theme throughout courses in order to support students' understanding of foundational programming.
The existence of thresholds in learning has long been recognized. Their nature has been well characterized, and it is understood that they need to be treated differently from other core concepts. Helping learners cross thresholds has been identified as one of the challenges of course and curriculum design, and best practices for integrating threshold concepts into the classroom via active learning have been identified. Threshold concepts exist in computing education as well. In this paper we examine two difficult concepts in software development: model selection and substitutability. By treating them as threshold concepts and applying the recommended practices, we have succeeded in spiraling them into a course, with suitable scaffolds, over a period of several weeks. In doing so, we developed examples that enable students to find relevance through a broader view of substitutability, and we developed a novel approach to writing requirements that helped students understand why finite state machine (FSM) models are preferred in some situations.
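The classroom examples developed in that paper are not reproduced here; as a generic illustration of why substitutability behaves like a threshold concept, the following Java sketch uses the well-known Rectangle/Square pitfall. All names are hypothetical.

// A client written against the supertype's contract: after setSize(w, h),
// the area must be w * h.
class Rect {
    protected int w, h;
    void setSize(int w, int h) { this.w = w; this.h = h; }
    int area() { return w * h; }
}

// A Square "is-a" Rect structurally, but it silently strengthens the invariant
// (width == height), so it is NOT behaviorally substitutable for Rect.
class Square extends Rect {
    @Override void setSize(int w, int h) { this.w = this.h = w; }
}

public class SubstitutabilityDemo {
    // Passes for any behaviorally substitutable Rect; fails for Square.
    static void checkContract(Rect r) {
        r.setSize(3, 4);
        assert r.area() == 12 : "substitutability violated: area = " + r.area();
    }
    public static void main(String[] args) {
        checkContract(new Rect());   // ok
        checkContract(new Square()); // AssertionError when run with java -ea
    }
}

The threshold to cross is that "compiles as a subtype" and "safe to substitute" are different claims, which is exactly what such examples make visible.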
An important learning outcome in software engineering education is the ability to write an effective test suite that rigorously tests a target application. The standard approach for assessing test suites is to check coverage, which can be problematic because coverage rewards code invocation without checking the correctness of test assertions.
Mutation Analysis (injecting a small fault into a clone of a codebase) has been used in both industry and academia to check test suite quality. A mutant is killed if any test in the suite fails on the clone; the more mutants killed, the stronger the suite, as it is more sensitive to defects. Mutation Analysis has seen limited use in educational settings because of the prohibitive cost, in both time and compute power, of running students' suites over all generated clones.
We employed Mutation Analysis to assess test suite quality in our upper-year Software Engineering course at a large research-intensive university. This paper makes two contributions: (1) we show that it is feasible and effective to use a small sample of hand-written mutants for grading, and (2) we assess its effectiveness for promoting student learning by comparing students graded with coverage to those graded with Mutation Analysis.
We found that mutation-graded students write more correct tests, check more of the behaviour of invoked code, and more actively seek to understand the project specification.
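To make the grading mechanism concrete, here is a small hypothetical Java/JUnit 5 sketch of a hand-written mutant, a test that achieves coverage without killing it, and a boundary test that kills it. The example is ours, not drawn from the course.

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class Shipping {
    // Original code under test: free shipping strictly above $50.
    static boolean freeShipping(double total) { return total > 50.0; }
    // Hand-written mutant (would live in a clone of the codebase): > flipped to >=.
    static boolean freeShippingMutant(double total) { return total >= 50.0; }
}

class ShippingTest {
    // Fully covers freeShipping, yet its inputs cannot tell > from >=:
    // run against the mutant, both assertions still pass, so the mutant survives.
    @Test void coversButCannotKill() {
        assertTrue(Shipping.freeShipping(80.0));
        assertFalse(Shipping.freeShipping(20.0));
    }

    // Probes the boundary: run against the mutant, this assertion fails,
    // i.e., the test kills the mutant.
    @Test void killsTheBoundaryMutant() {
        assertFalse(Shipping.freeShipping(50.0));       // passes on the original
        assertTrue(Shipping.freeShippingMutant(50.0));  // the mutant's behaviour differs here
    }
}

This is the distinction the abstract draws: both tests earn coverage credit, but only the second earns mutation credit, because only it asserts behaviour the fault can change.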
Data Science practices are increasingly leveraged in disparate domains of research, whether as part of industry workflows, governmental department initiatives, or open problems within academic communities. Herein, we describe designing term projects that introduce senior undergraduate students to applied Data Science research for industry, governmental, or academic "clients" through a series of course assignments and client meetings. We outline the lessons learned and describe how they may be adapted within similar courses. Students become familiar with data science best practices, obtain applied research experience, and (potentially) benefit professionally from an actual research contribution in the form of a peer-reviewed conference publication; at the time of writing, we have published three student-led projects in the proceedings of eminent peer-reviewed conferences. We highly recommend introducing undergraduate students to such client-serving research applications early in their program to encourage them to consider pursuing a research-focused career path.
Expressions are the building blocks of formal languages such as lambda calculus, as well as of programming languages that are closely modeled after it. Although expressions are also an important part of programs in languages like Java that are not primarily functional, teaching practices typically do not focus as much on them.
We conduct both a theoretical analysis of the Java language and an empirical analysis of how novices use expressions in Java programs, to understand the role expressions play in writing programs. We then systematically analyze teaching materials for Java to characterize how they present expressions.
Our findings show that expressions are an essential construct in Java and that they are prevalent in student code, but also that current textbooks do not introduce expressions as the central, general, and compositional concept they are.
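As one concrete illustration of the compositionality at issue (our own example, not taken from the analyzed textbooks or student code), the following Java fragment contrasts a statement-oriented computation with the same value built as a single composed expression.

import java.util.List;

public class ExpressionComposition {
    public static void main(String[] args) {
        List<Integer> scores = List.of(71, 48, 93, 55);

        // Statement-oriented style: state mutated step by step.
        int sum = 0;
        int passing = 0;
        for (int s : scores) {
            if (s >= 60) {
                sum += s;
                passing++;
            }
        }
        double avg1 = passing == 0 ? 0.0 : (double) sum / passing;

        // Expression-oriented style: one value built by composing subexpressions
        // (method calls, a lambda, and a conditional fallback).
        double avg2 = scores.stream()
                            .filter(s -> s >= 60)
                            .mapToInt(Integer::intValue)
                            .average()
                            .orElse(0.0);

        System.out.println(avg1 + " == " + avg2); // both print 82.0
    }
}

Even the first version leans on expressions throughout (s >= 60, sum += s, the conditional expression), which is the paper's point: expressions are pervasive even in code that is taught statement-first.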
Teamwork is integral to any computer science curriculum because it provides students with experiences that mirror those in industry. Some students are resistant to working in teams because of perceived inequities. Assessing individual participation and team dynamics can provide faculty with valuable information for designing and deploying interventions that improve students' teamwork skills and help dysfunctional teams.
This work examines the team-harmony experience of pairs in a large (300-person) third-year Software Engineering class at a North American research-intensive university. For the past seven semesters, we have asked students to regularly report their sense of equity regarding their contributions to group discussions, their influence over task assignments, and their overall contributions to their course project development.
Based on our analyses, four periods emerged: before COVID-19, the transitional period as pandemic restrictions were introduced, during COVID-19, and after the acute COVID-19 period ended and restrictions were lifted. Overall, we saw that students experienced a decrease in team harmony during the transition to lockdown and that harmony recovered in subsequent semesters, although some measures gradually trended worse over time in the post-pandemic period.
Program design should be taught with a comprehensible guideline and appropriate tool support. While Felleisen et al.'s program design recipe serves as a good guideline for novice learners, no existing tool provides sufficient support for step-by-step design. We propose Mio, an environment for designing programs based on the design recipe. In Mio, the programmer uses blocks to express design artifacts, such as examples of input and output data. The system checks the consistency of the design, gives feedback to the programmer, and produces a half-completed program for use in the steps that follow design. A preliminary classroom experiment showed that Mio makes program design easier for novices and encourages programmers to follow the design recipe. In this paper, we demonstrate the core features of Mio, report the results of the experiment, and discuss our plans for extensions.
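The design recipe is conventionally taught in Racket-family languages, and Mio expresses its artifacts as blocks rather than text; purely as an illustration of the recipe's steps, the sketch below transcribes them into plain Java, with hypothetical names and the examples written before the body is filled in.

public class DesignRecipeSketch {
    // Step 1 -- Data definition: a temperature in degrees Celsius,
    // represented as a double.

    // Step 2 -- Signature and purpose statement:
    // celsiusToFahrenheit : double -> double
    // Converts a temperature from Celsius to Fahrenheit.

    // Step 4 -- Body, completed last; until then the method stub is the
    // "half-completed program" the examples are checked against.
    static double celsiusToFahrenheit(double c) {
        return c * 9.0 / 5.0 + 32.0;
    }

    // Step 3 -- Examples of input and output, written BEFORE the body;
    // a tool in the spirit of Mio can check these for design consistency.
    public static void main(String[] args) {
        assert celsiusToFahrenheit(0.0) == 32.0;   // run with java -ea
        assert celsiusToFahrenheit(100.0) == 212.0;
        System.out.println("all examples pass");
    }
}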