Abstracts for Lightning Talks and Posters

Mathew Hillier and Andrew Fluck. An off-line e-assessment platform for computer science education

University students typically learn and complete formative assessment using twenty-first-century tools of the trade, including multimedia resources, databases, software development tools, simulations and a range of specialist software applications. However, when it comes to high-stakes exams, students are mostly forced to demonstrate their expertise using pen on paper. This dichotomy is thrown into particularly stark relief in computing courses. Student frustration with paper-based exams was demonstrated by a university-wide survey carried out in 2014 in Australia, which sought the opinions of the student body on the idea of e-exams. When asked whether they thought exams should be computerised, students in the computing disciplines expressed statistically significantly stronger agreement than the average.

Difficulties also arise when students are located in remote regions where access to the Internet can be severely limited. This can make studies in computing courses problematic. The combination of these factors results in an inconsistent set of tools and resources being available between on-campus formative learning, remote learners and in high stakes assessment.

We will explore the design for an assessment platform that can be used for both formative and summative e-assessment utilising bring-your-own laptops. Such a platform provides a consistent tool set and learning environment to all candidates in high stakes exams, on-campus formative assessment and for formative learning while off-campus in remote and isolated regions. The platform is the subject of a national project, and raises the possibility of wide scale curriculum reform incorporating computers more deeply into student learning by leveraging modern assessment practices.

Miguel Angel Rubio. Automatic Categorization of Introductory Programming Students Using Programming Exercise Data

Learning to program can be quite difficult for introductory programming students. They must master language syntax, programming theory and problem-solving techniques in a short period of time. As a consequence, a significant percentage of students struggle to successfully complete CS1. To make teaching this course even more challenging, there has been a large increase in students taking introductory programming; CS1 courses with several hundred students are common nowadays.

Several studies have shown that the first weeks of CS1 are critical. Students who do poorly during the first weeks tend to fail at the end of the course. Detecting which students are struggling and offering them additional help would probably increase the pass rate. In large courses, automatic detection methods must be employed.

The approach followed in this study is to analyze students' exercise data using cluster analysis to assess the students' learning stage. We have used two sets of exercises based on the neo-Piagetian model [1]. Students completed one set at mid-course and the other set at the end of the course. We have analyzed the data obtained using cluster techniques and have been able to automatically classify students into different learning stages in both cases. Most students classified into the lowest stage at mid-course remained in that stage at the end of the course.
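As an illustration of the kind of clustering step described above, the following Python sketch groups students by their scores on two exercise sets. This is a minimal sketch under stated assumptions: the toy data, the two-cluster choice, and the deterministic seeding are all invented for illustration, not the study's actual features or algorithm.

```python
def kmeans_two(points, iters=20):
    """Minimal two-cluster k-means over (mid-course, end-of-course) score pairs.
    Seeds deterministically with the first point and the point farthest from it."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    centroids = [points[0], max(points, key=lambda p: d2(p, points[0]))]
    groups = [[], []]
    for _ in range(iters):
        # Assign every student to the nearest centroid.
        groups = [[], []]
        for p in points:
            groups[0 if d2(p, centroids[0]) <= d2(p, centroids[1]) else 1].append(p)
        # Move each centroid to the mean of its group.
        new = [tuple(sum(x) / len(g) for x in zip(*g)) if g else centroids[i]
               for i, g in enumerate(groups)]
        if new == centroids:
            break
        centroids = new
    return centroids, groups

# Toy data: (mid-course score, end-of-course score) for six hypothetical students.
students = [(1, 1), (2, 1), (1, 2), (8, 9), (9, 8), (9, 9)]
centroids, groups = kmeans_two(students)
# groups[0] collects the low-scoring students, groups[1] the high-scoring ones.
```

In practice one would cluster on richer per-exercise features and validate the resulting groups against the neo-Piagetian stage definitions rather than raw scores.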

[1] Lister, R. 2011. Concrete and other neo-Piagetian forms of reasoning in the novice programmer. Proceedings of the Thirteenth Australasian Computing Education Conference, Volume 114 (2011), 9–18.

Miranda Parker. A K–12 CS Framework: Creating a Foundation for the Future for All Students

This lightning talk introduces and solicits feedback on the recently released K–12 Computer Science (CS) Framework, a collaborative effort in the United States. The K–12 CS Framework describes what all students need to know and be able to do in computer science. It represents a vision in which all students, from a young age, engage in the powerful ideas in computer science, becoming equipped to develop new approaches to problem solving while becoming producers of computing technologies.

The framework is a collaborative effort, developed by the Association for Computing Machinery, the Computer Science Teachers Association, Code.org, the Cyber Innovation Center, and the National Math and Science Initiative, along with more than 100 advisors from the U.S. computing community, several U.S. states, large school districts, technology companies, and other organizations. The document is intended to provide high-level conceptual guidance to states or school districts creating a K–12 pathway in computer science. It can assist in the development of standards, curriculum, instruction, or teacher professional development.

This lightning talk will introduce the K–12 CS Framework to the audience with an overview of the contents of the framework and why it is important. Additionally, the audience will be solicited for feedback on the framework. Questions that will be asked of the audience include: 1) What would a multi-national CS framework look like? 2) What other research can be used to support the framework? 3) What might a future research agenda based on the framework look like? 4) What changes could be made to the framework to increase equitable access to computer science? In essence, this lightning talk aims to spark discussion about the K–12 CS Framework with an international computing education research audience, to gather insights and perspectives to inform future K–12 CS education research.

Nickolas Falkner and Claudia Szabo. Identifying Clear Signals In Student On-line Discussion Activity

Observing what students do richly informs our learning design and practice [1]. Recording student discussion for later analysis has become easier as discussions migrate to learning management systems and Massive Open On-line Courses (MOOCs). We can make well-defined statements about the information contributed by the timing or frequency of a student response, and this scales well, but it is less clear what is being conveyed when we explore discussion themes and semantics.

We need to identify how much information is sufficient to assess the effect of an intervention or to provide a basis for analysis. Does population size convey the same degree of certainty when we are examining contributions to topics, activity relative to the cohort, or any form of natural language processing? We have clear guidance on the limits of signal processing from Shannon but we do not have clear guidelines on how we interpret, or even construct, the “bits” of information represented by student contribution to topics. We want to identify clear signals from the general discussion noise, making use of mutual information and entropy to enhance interpretation and reasoning about student behaviour. A clear indication of student behaviour in one area can then provide grounds for reasoning as to how they may behave elsewhere and, potentially, why they have changed behaviour.
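As a toy illustration of the entropy and mutual-information framing above, the Python sketch below estimates H(X) and I(X;Y) from discrete observations. The forum topics and outcome labels are invented for the example; they are not the authors' data or their actual analysis pipeline.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy H(X), in bits, estimated from an observed label sequence."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from paired observations."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# Hypothetical paired observations: the topic of each forum post, and whether
# the posting student later passed the course.
topics = ["debugging", "debugging", "admin", "admin", "debugging", "admin"]
passed = [True, True, False, False, True, False]

h_topics = entropy(topics)               # 1.0 bit: an even two-way topic split
mi = mutual_information(topics, passed)  # 1.0 bit: topic fully determines outcome here
```

A mutual information near the topic entropy, as in this contrived example, would mean topic choice carries nearly all the available signal about the outcome; real forum data will be far noisier.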

We are analysing the discussion forums of face-to-face and MOOC communities, using statistical and information-theoretic methods, machine learning, topic modelling and NLP to attempt to answer the question: “How much, and what type of, information do I need from these students to sufficiently understand and predict their behaviour?”

[1] Martin, Edwards, and Shaffer. 2015. The Effects of Procrastination Interventions on Programming Project Success. In Proceedings of the eleventh annual International Conference on International Computing Education Research (ICER ’15). ACM.

Lightning Talk Only

Cullen White and Joseph Wilson. Exploring Computer Science Fellows: Ensuring Computer Science For All Through Teacher & School Leader Advocacy

“Computer Science For All” represents a monumental opportunity to ensure that all students – especially low-income students, students of color, female students, and students with intersections of those identities – receive access to high-quality Computer Science (CS) education. In the United States, a disconnect exists between the high demand among low-income students and their parents for access to CS and the perceptions of school leaders, who may underestimate this demand or lack coherent strategies for meeting it [1].

Exploring Computer Science Fellows, a program created by Teach For America (TFA) in collaboration with Exploring Computer Science (ECS), addresses these challenges [2]. TFA is a corps of outstanding recent college graduates and professionals who commit to teach for at least two years in low-income urban and rural public schools. TFA placement schools serve students historically underrepresented in computing by socioeconomic status (>80% eligible for free/reduced-price lunch) and racial/ethnic background (>80% identify as African American or Latino/a). ECS is a National Science Foundation-funded, research-based teacher professional development program focused on creating culturally responsive high school CS classrooms.

Specifically, TFA is recruiting 80+ teachers in 10+ geographic regions to teach ECS, helping school administrators prioritize CS instruction, and increasing access to CS for a diverse group of students. TFA expects to reach 2,000+ students who are traditionally underrepresented in CS. Additionally, this project has uncovered significant interest in CS teaching among pre-service teacher candidates, with 1,400+ applicants accepted to TFA indicating their desire to teach CS.

[1] “Searching for Computer Science: Access and Barriers in U.S. K–12 Education”. Internet: http://services.google.com/fh/files/misc/searching-for-computer-science_report.pdf [April 12, 2016]

[2] M. Moritz and J. Wilson. “Helping high-needs schools prioritize CS education through teacher advocacy and experiences”. ACM Inroads, vol. 6, pp. 73–74, 2015.

Joy Gasson, Patricia Haden, Dale Parsons and Krissi Wood. Flying or Flailing: A Study of Affect when Learning to Program

Learning to program elicits strong emotional responses, both positive and negative. As lecturers we are interested in understanding how this emotional response impacts on the learning experience. Recently, in the final exam for our two first-year programming papers, we have included the question: “draw, sketch or otherwise depict what programming means to you”. The question was initially intended as light relief for students at the end of a long exam. However, we have been surprised not only at the time and effort the students take with their drawings but also at the strength of emotional reaction that they illustrate – from ecstasy and power to frustration and helplessness. We anticipate that these drawings will be a rich source of data for future studies.

CS Education literature focusses primarily on pedagogical technique, rather than emotional experience. There are few available tools for measuring affect quickly and easily. Thus, we are developing an online survey tool which attempts to explore the range of student emotions using a simple, structured metric. The tool presents a series of two-dimensional graphs where the axes are pairs of related subjective constructs, for example, the perceived familiarity of a programming problem, and the extent to which the student feels he or she has a clear plan for solving that problem. Students indicate their feelings by clicking a location in the two-dimensional area defined by each pair of axes. Students used the tool at the end of each class session throughout one semester in 2016. The results have been surprisingly rich, providing insights into the quality of course materials, student frustration levels, differences between students of different skill levels, and students at risk.

In this talk, we will discuss the data that we have gathered in the current study and its implications for further studies. We would welcome feedback as we explore approaches that can help students become more resilient when learning to program. We are especially concerned with supporting students who have negative emotional experiences.

Michael Lee. Using Gidget to Teach CS1 in a College Course

Gidget is a freely available, online educational debugging game designed to teach novices computer science concepts. We designed Gidget as a discretionary game that anyone could access online and play at their own pace and leisure. Several studies have shown that Gidget is effective at engaging novice programmers [1, 4, 5, 6] from all over the world [2] and that users who play through the game show measurable learning outcomes [3].

Due to the success of Gidget in discretionary contexts, we were curious to see if we could replicate our positive results in a compulsory educational setting. Although we have used Gidget in several summer camps for teenagers [1, 5], playing through the game was largely regarded as an enrichment activity complemented with other computer science related exercises. Therefore, to explore this question in a formal educational setting, we created a freshman-level CS1 course at a private university in Connecticut, USA, centered primarily on using Gidget as the language of instruction (with an introductory CS textbook providing additional exercises and explanations of concepts). In this lightning talk, I present our syllabus and discuss preliminary observations from our semester-long course.

[1] Jernigan, W., et al. (2015). A Principled Evaluation for a Principled Idea Garden. IEEE VL/HCC, 235–243.

[2] Lee, M.J. (2015). Teaching and Engaging with Debugging Puzzles. University of Washington Dissertation (UW), Seattle, WA.

[3] Lee, M.J., and Ko, A.J. (2015). Comparing the Effectiveness of Online Learning Approaches on CS1 Learning Outcomes. ACM ICER, 237–246.

[4] Lee, M.J., and Ko, A.J. (2011). Personifying Programming Tool Feedback Improves Novice Programmers’ Learning. ACM ICER, 109–116.

[5] Lee, M.J., et al. (2014). Principles of a Debugging-First Puzzle Game for Computing Education. IEEE VL/HCC, 57–64.

[6] Lee, M.J., Ko, A.J., and Kwan, I. (2013). In-Game Assessments Increase Novice Programmers’ Engagement and Level Completion Speed. ACM ICER, 153–160.

Eileen Kraemer, Murali Sitaraman, Russ Marion, Cazembe Kennedy and Gemma Jiang. Applying Complexity Leadership Theory to the Adoption of Evidence-based Practices in Computer Science Education

Much research has been conducted on evidence-based teaching practices in the context of Science, Technology, Engineering, and Mathematics (STEM) education. Although these practices have been shown to engage students in learning and to broaden participation, their adoption has been slow. We attempt to tackle this adoption problem through the use of Complexity Leadership Theory (CLT). The change that we seek to enable is the adoption and creative application of evidence-based practices in Computer Science education at Clemson University, so that best practices such as active learning become the new norm. This is exploratory research, as CLT has not previously been used in this context. To gauge our success, we will use complexity network analyses, attitude surveys, and other measures of engagement with respect to active learning. We solicit feedback from ICER attendees on practices that promote the adoption of best practices in CS Education.

Given the importance of student engagement for learning, the proposed project will benefit all students. Evidence-based practices such as active learning methods are particularly beneficial for STEM students from disadvantaged backgrounds, students from underrepresented groups, and female students in male-dominated fields. The benefits of the project will impact the entire faculty and student population of the School of Computing, and will institutionalize best practices that will continue to serve students in the future. The project will show us how to effect change in academic institutions so that best educational practices are widely adopted. By developing an emergent model for creative change, the project can enable diffusion of best research practices in STEM education to reach educators and students across not only computing, but all STEM disciplines.

Poster only

Yusuf Pisan. Peer Teaching Feedback Based on Classroom Observation

While the value of peer feedback is universally acknowledged, most lecturers do not receive any peer feedback on their classroom teaching. Even academics who are familiar with computing education find it difficult to incorporate new teaching strategies gleaned from conference papers into their own classrooms. Previous research interventions, such as the Peer Assisted Teaching Scheme (PATS) [1], have failed to make a long-term impact beyond the duration of the project. Similar schemes continue to be implemented across different universities [2], showing very positive results throughout the implementation period, but the impact disappears within five years of the official program's termination.

There is a wide gap between how change in teaching is viewed from administrators' and lecturers' perspectives. For administrators, success is measured by the number of participants, positive comments and documented changes in classroom teaching. For lecturers, there is a tendency to just go along with the imposed intervention, as voicing dissent can get them labeled as resistors or “not team players”. Lecturers use a “practicality ethic” when evaluating and adapting changes to their teaching, asking: 1) Will this address problems I face now, rather than problems somebody else has identified? 2) How much time and effort is needed? 3) How can it be adapted for my particular set of students? [3]

We are currently implementing a peer teaching feedback project at the [Author Affiliation], going down the well-travelled road of a successful intervention with minimal long-term impact once the project is concluded. This time, however, we are focusing not on the project's success but on changing lecturers' approach to teaching. We are interested in receiving feedback and ideas on what can be done as part of this project to achieve long-term cultural change around teaching.

[1] Peer Assisted Teaching Scheme (PATS) Downloaded from http://vera195.its.monash.edu.au/

[2] Peer Observation of Teaching (2015) A Discussion Paper prepared for the Peer Observation of Teaching Colloquium, University of Queensland. Downloaded from http://itali.uq.edu.au/filething/get/1923/PeerObservationTeachingDiscussionPaper.pdf

[3] Doyle, W., & Ponder, G. (1977). The practicality ethic and teacher decision-making. Interchange, 8, 1–12.

Jo Coldwell-Neilson. Decoding Digital Literacy

Digital literacy was originally conceptualized as “the ability to understand and use information in multiple formats from a wide range of sources when it is presented via computers” [1]. It has evolved to incorporate elements derived from other terms such as information literacy and media literacy, and is now used to describe anything related to technology or computers. It is used interchangeably with behaviours, with understanding how technology works and, more broadly, with the role of technology in daily operations.

It is for this reason that digital literacy is hard to define. The complexity of the term creates challenges for educators who are responsible for equipping students for employment in the digital age, regardless of discipline. It also poses challenges for students, who are expected to acquire a set of capabilities that is, at worst, unknown and, at best, fuzzy. Often staff and students' expectations are not aligned, causing significant issues for both parties [2].

This poster will argue that a shared understanding of digital literacy should be built and a digital literacy benchmark developed for students entering and graduating from Australian higher education institutions, bridging the gap between school skills and workplaces skills. This understanding will provide grounding and insight for disciplines to interpret digital literacy graduate learning outcomes in their context and thus, improve graduate employability.

[1] Gilster, P. 1997. Digital Literacy, Wiley Computer Pub. (p. 1)
[2] Coldwell-Neilson, J. 2013. Managing Expectations: a changing landscape. In McKay, E. (Ed.) ePedagogy in Online Learning: New Developments in Web Mediated Human Computer Interaction. Chapter 1, 1–17. IGI Global.

This research is supported by Australian Government OLT National Teaching Fellowship FS16-0269.

Kumpei Tsutsui and Hideyuki Takada. A Classroom SNS to Develop Creative Thinking Skills in Programming Learning

Recently, workshops involving programming have been held in elementary education in order to promote creative thinking. As a model for developing creative thinking skills, a spiraling cycle of Imagine, Create, Play, Share and Reflect is emphasized [1]. In our experience of conducting programming workshops at elementary schools for over ten years, however, there is rarely enough time to let children Share and Reflect on their creative activity. Share and Reflect activities lead to self-evaluation and peer-evaluation, which provide motivation.

In this poster presentation, we will explain a classroom SNS system and an iPad application for children to post and share their projects. On this SNS site, they are able to post three kinds of content: an action view, a code view, and comments. By recording the process of creation with the iPad and sharing their projects with others on the SNS, children have an opportunity to review their projects. At the same time, children who are inexperienced in programming can refer to others' projects to expand their ideas.

The SNS site has some specific features to motivate children. When posting their projects, they can choose to hide the action view or the code view. Other children must ask the contributor for a password in order to see the hidden part of the post, which promotes interaction among children. A handwriting function enables children to write their comments easily. By taking a screenshot of the action and code with the iPad, they can select the most appealing part of their project to share. Stamps allow them to express their impressions simply and playfully.

[1] Resnick, M., 2007, June. All I really need to know (about creative thinking) I learned (by studying how children learn) in kindergarten. In Proceedings of the 6th ACM SIGCHI conference on Creativity & cognition (pp. 1–6). ACM.

Francisco Enrique Vicente Castro. The Impact of Lightweight Discussions of Program Plans in First-Year CS

Most programming problems have multiple viable solutions that organize the underlying problem’s tasks in fundamentally different ways. This requires students to make multiple choices in crafting solutions. Some choices focus on lower-level concerns such as which language constructs to use (e.g., for versus while loops). Higher-level decisions include how to cluster problem subtasks into individual functions or code blocks. The organization and clustering of subtasks is often called a plan [4]. Which plans students implement and value depends on the solutions they have seen before [1, 3] as well as features of their programming language [2]. Many first-year courses teach different equivalent low-level constructs without also discussing different higher-level plans. How much exposure to planning do students need before they can appreciate and produce different solution plans?

We report on a study in which students in two introductory courses at different universities were given a single lecture on planning between assessments. Surprisingly, that one lecture sufficed to get students to produce multiple high-level plans (including ones first introduced in the lecture) and to richly discuss tradeoffs between plans. This suggests that planning can be taught with fairly low overhead once students have a reasonable foundation in programming.


[1] Francisco Castro and Kathi Fisler. 2016. On the Interplay Between Bottom-Up and Datatype-Driven Program Design. In Proceedings of the 47th ACM Technical Symposium on Computing Science Education. SIGCSE ’16. New York, NY, USA: ACM, 205–210.

[2] Kathi Fisler, Shriram Krishnamurthi, and Janet Siegmund. 2016. Modernizing Plan-Composition Studies. In Proceedings of the 47th ACM Technical Symposium on Computing Science Education. SIGCSE ’16. New York, NY, USA: ACM, 211–216.

[3] Robert Rist. 1989. Schema creation in programming. Cognitive Science (1989), 389–414.

[4] E. Soloway. 1986. Learning to Program = Learning to Construct Mechanisms and Explanations. Commun. ACM 29, 9 (September 1986), 850–858.

Leo Porter, Mike Clancy, Cynthia Lee, Cynthia Taylor, Kevin Webb and Daniel Zingaro. Course-Level Learning Goals for Basic Data Structures

Many of us would agree that Basic Data Structures is an essential topic in computing. Basic Data Structures are often taught as part of a second programming course (CS2), and in many curricula, CS2 is a gateway course to upper-division courses. Moreover, data structures are a common topic in industry interviews, effectively solidifying data structures as core computing knowledge. But what, precisely, do we want students to learn about data structures?

We find that educators generally agree that basic data structures are core to CS2, but they may disagree on what, in particular, students need to learn. For example, is the goal to learn how to use an Abstract Data Type (ADT)? Is the goal to be able to implement a wide variety of data structures? Or is the goal to firmly understand the performance implications of using (and/or designing) data structures?

Establishing a set of learning goals for this topic enables the community to co-develop course materials, articulate curricular demands, and assess learning. Hence, the goal of our work is to carefully articulate course-level learning goals for Basic Data Structures. To this end, we have spoken extensively with a diverse set of instructors who commonly teach CS2 at their respective institutions and have, through these discussions, arrived at a draft set of course-level learning goals. Here we ask for your formal or informal critique of these learning goals.

Our poster session aims to motivate the audience to care about solidifying these learning goals for the community. Attendees will be able to nominate particular learning goals and offer suggestions for improving ours.