Thursday, June 14, 2012

Web Design is About Communication

This spring, I created a new course on web design. As part of the process, I consulted with two experts who do web design work regularly; as a professor and course designer, I highly recommend this practice. I sent a major assignment to both experts via email and got some very interesting feedback. First off, the experts suggested that I (and almost anyone who designs courses on web design) was going about things the wrong way.

The original assignment was tightly specified: students were required to create a website for a fictitious client according to a fixed set of requirements. But the experts pointed out that there is rarely a time when a client knows exactly what he or she wants and can explain it to you. A lot of the web design process involves communicating with the client to find out what he or she needs, and negotiating to determine what is wanted and what can reasonably be done within the project timeline. The experts suggested that in addition to the technical skills needed to create a website, I should require students to learn the communication skills that help them negotiate with clients.

Any web design class can teach HTML and CSS, but a high-quality class also helps students practice the communication and negotiation skills they need to deliver a quality web design to a client.

Learn more about E-Learning at Northern State University

Friday, May 11, 2012

Free Alternatives to Adobe Photoshop and Illustrator


I hear a lot about Adobe products lately. As I look at the job market for my students at Northern State University, many job descriptions in the E-Learning field list familiarity with Adobe products as requirements or qualifications for the position. However, the individuals who write these descriptions are usually not familiar with free and open-source software (FOSS) alternatives that can do 80-90% of what the Adobe products do. In some cases, such as Adobe Flash and Captivate, it is hard to find a good free/open-source alternative, but in the case of Photoshop and Illustrator, there are some great programs out there that are absolutely free, very functional, and that I have used with great success:

Adobe Photoshop alternatives

GIMP - http://www.gimp.org/
GIMP has been around for years, yet there seems to be relatively little awareness of its existence among E-Learning professionals. GIMP is the ultimate Photoshop alternative, offering fine-grained photo enhancement, retouching, and cropping features.

Google Picasa - http://picasa.google.com/
Picasa is a great photo organizer/editor similar to iPhoto (but free). It doesn't have all of the bells and whistles of GIMP, but it does offer quick and easy photo enhancement, retouching, and cropping, and the most recent version offers new image filters for fun photo effects.

Adobe Illustrator alternative

Inkscape - http://inkscape.org/
Inkscape is an excellent vector graphics editor. It is my go-to application for designing and editing graphics for E-Learning projects. I recently used Inkscape to create some icons for a touch screen project, and it worked very well.
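
As an aside, for anyone who wants to batch-export a set of SVG icons to PNG the way I did here, the sketch below shows one way to drive Inkscape's command line from Python. It is only an illustration with made-up folder names, and it assumes the 0.48-era Inkscape flags (--export-png, --export-width, --export-height); newer releases use --export-filename instead.

    import subprocess
    from pathlib import Path

    # Illustration only: export every SVG in a (hypothetical) icons/ folder to
    # 128x128 PNGs by calling Inkscape's command line.
    # Flag names assume an Inkscape 0.48-era CLI.
    ICON_DIR = Path("icons")
    OUT_DIR = Path("icons_png")
    OUT_DIR.mkdir(exist_ok=True)

    for svg in sorted(ICON_DIR.glob("*.svg")):
        png = OUT_DIR / (svg.stem + ".png")
        subprocess.run(
            ["inkscape", str(svg),
             "--export-png=" + str(png),
             "--export-width=128",
             "--export-height=128"],
            check=True,
        )
        print("exported", png)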



These FOSS applications don't have all of the bells and whistles that the Adobe versions have, but in the end it doesn't matter what tools you use to create your e-learning project, as long as the result is effective!

Learn more about E-Learning at Northern State University

Tuesday, September 9, 2008

Minimal Guidance is Relative

I recently read an interesting article entitled Why Minimal Guidance during Instruction Does Not Work that opened up a whole new world of thoughts about constructivism for me. The article uses three main arguments to explain why constructivist approaches to learning have not worked: Sweller's cognitive load theory, theories about long-term and short-term memory, and differences between experts and novices.

The first thought I had about this article is how the authors define “work,” that is, whether a certain activity is working or not. They make it clear that an activity works when learned items are stored in long-term memory. But aren't there many other ideas in the field about whether something works, based on other criteria? For instance, instead of just storing something in long-term memory, shouldn't we be able to perform in some greater capacity than we could before an activity? If motivation to learn is the specific problem, shouldn't that be used as a criterion for what works in an activity? And what of problem-solving ability: is that not very useful in our information age?

The authors also lump all types of subjects and learners together when they say that guided instruction works better than minimal guidance. The only distinction they make is between novices and experts. I see a more comprehensive continuum between novice and expert, and at some point along it, I think we are better off giving minimal guidance as learners become more experienced.

Also, one of the main tenets of the article is the idea that many constructivist activities are done with too little guidance and that adding guidance is admitting that constructivism is inadequate. I don't think this is the case. Based on my experience working with constructivists, the authors' view of constructivism is very different from what actually happens. In fact, most on either side of the spectrum would agree that giving no guidance at all is ineffective for promoting learning. It seems to me that the difference is that constructivists want to give enough guidance to help the learner along, but not so much that it stifles the creativity and problem-solving ability of the learner.

Lastly, the authors mention that giving learners complete and correct information is the best method for learning to occur. A constructivist may say “whose information are they being given?” In other words, constructivism may challenge the notion that there is one correct version of the information irrespective of the situation or knower of the information.


Reference:

Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why Minimal Guidance during Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 41(2), 75.


Monday, June 9, 2008

Digital Game-Based Learning?

A friend of mine once told me that we are not here to entertain students, we are here to teach them. In that situation I wholeheartedly agreed: I had just been trying, with limited success, to teach a Sunday school class to a group of digital natives with no attention span. But then I thought about the field of Instructional Technology and how many in the field try to entertain students; in fact, a large part of formative evaluation involves asking students how they felt about the instruction in question and what they would do to improve it. Some of these answers (especially those coming from digital natives) may ask for more entertainment in teaching.

In my own opinion, instructional designers should be required to make what they are teaching relevant and useful to their students now or in the future, but they should not have to entertain students beyond that. Maybe your opinion is different, but consider that in order for our instruction to compete with the other things digital natives do, it would have to meet the quality standards of a professional video game production. This is doable, but only with a lot of time and money.

A common complaint among digital natives is that they are bored, but what this really means is that they are not being stimulated as much as they would be doing something more stimulating. Boredom is relative. Instructional Technology often tries to cater to digital natives' needs by embedding instruction in video games or other digital media that they are used to. While these efforts are commendable, I wonder if they are somewhat misguided, because ultimately school must help get the next generation ready to work in meaningful jobs.

I am not saying that all jobs are boring, but I am saying that most jobs now require workers to stick to tasks that digital natives would consider boring. Life is full of "boring" things that have to be done. While video games teach problem-solving skills and critical thinking, the problems being solved are more often than not very different from the real-life tasks that students will do in the future. Do these problem-solving and critical-thinking skills, learned in a very exciting environment, actually transfer to "boring" or real-life tasks that people do in their jobs? I don't know, but I think that this is what we should be asking in digital game-based learning. Perhaps instructional design efforts should be made to help people become creative enough to take care of their own boredom problems. Or maybe "paying-attention" skills need to be the focus of some instruction for digital natives.

Brett Shelton discusses using a commercial game for education in his book, The Design and Use of Simulation Computer Games in Education. He mentions that something called unintentional learning happens in these types of games, which is not useful from a design standpoint (2007, p. 108). I have not read any studies that prove that general skills learned from video games (like problem-solving and critical thinking) transfer to any real-world situation (perhaps you have). But I think the best efforts of digital game based learning teach specific skills that are relevant to real-life.

In an excellent article entitled Game-Based Learning: A Different Perspective, Karl Royle (2008) explains that games and education have been and still are largely mutually exclusive. But Royle proposes a model of instruction from the field of instructional technology that allows the two to be blended: problem-based learning. Royle explains that problem-based learning in a video game would require the learner to complete a complex, real-world, authentic task by applying rote information found in the game. In other words, learners would learn useful information and apply it to a unique, relevant, real-world task.

This is the type of approach that I see being useful from an instructional design standpoint. It instructs and makes use of only relevant media. Thousands of "no significant difference" studies suggest that the delivery medium itself does not influence learning. Therefore, any irrelevant media added to instruction will not make a difference in real learning and can often be distracting, and many video games are full of such media. In contrast, there have been many great efforts to create instructional games built around relevant, real-world tasks, and I think these are the only ones useful for education. After all, we are not here to entertain students.

References:

Royle, K. (2008). Game-Based Learning: A Different Perspective. Innovate, 4(4).

Shelton, B. E. (2007). Designing Educational Games for Activity-Goal Alignment. In B. E. Shelton & D. A. Wiley (Eds.), The Design and Use of Simulation Computer Games in Education. Sense Publishers.


Wednesday, June 4, 2008

Instructional Technology Blames Teachers

Instructional Technologists always give teachers a hard time. It seems like all the talk I have heard lately about teaching practices has been negative toward the teacher engaged in the bad practice. But bad teachers are too often an easy target, and I think they are only a small part of the reason for less-effective instruction on college campuses today.

In the many discussions I had while completing my Master's degree in Instructional Technology, all bad teaching practices were attributed to the teacher who was doing them. It is really easy to blame the teacher; after all, the teacher is the person who does the act, and as Master's students just getting into Instructional Technology, my classmates and I would draw on our limited past experience with college, mostly our undergraduate classes, in which the most visible instructional component was the teacher.

But what if there were deeper roots to bad instruction than the teachers themselves? After having taught and worked to create courses for higher education, I can say there are. I have never met a single teacher who would not like his or her students to learn something. That is not the question in teachers' minds. The real question is how to do this under the constraints that higher education imposes upon them.

"Congratulations graduates, this diploma signifies that you have sat in your classroom seat for a certain number of hours and have received arbitrarily fabricated grades from overworked and underpaid instructors. You are now ready to do something totally different than you learned in college ;)" (photo provided by Josh Thompson)

Instructional Technology has come up with some great ways to help students learn better, more quickly, and in more depth, but few of these methods even consider the constraints that teachers in higher education are put under, and fewer still help alleviate them. Many theories claim that they do when they really don't. I think that some of the most valuable work being done in Instructional Technology involves systemic change in public education.

A professor I work with currently teaches several freshman classes of ninety students each semester, must do research that involves extensive travel and time to write and submit manuscripts, and must serve on several university committees. He wakes up at about 4am each morning to get to work and usually stays there until 6pm. He then goes home for dinner and does reading and research for the rest of the night. This is typical here; I know of at least three other people whose schedules are similar. They do this work because the university requires it of them. This is a regular university workload.

The field of Instructional Technology generally defines behaviorist practices such as lecture and multiple choice tests as bad teaching and assessment practices. But these are the very practices that help teachers be more efficient. For instance, a lecture can be the same every time, allowing teachers to create it once and then deliver it many times. Multiple choice tests in testing centers allow assessment of students' knowledge without having to involve the teacher at all.
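
To make that efficiency point concrete, here is a minimal sketch, entirely my own illustration with made-up data, of why machine-scored multiple choice assessment costs a teacher almost no time: once an answer key exists, scoring is a mechanical comparison no matter how many students take the test.

    # Hypothetical illustration: scoring a multiple choice quiz against an answer key.
    # The key, questions, and student responses below are all made up.
    ANSWER_KEY = {1: "B", 2: "D", 3: "A", 4: "C"}

    def score(responses):
        """Return the percent correct for one student's responses."""
        correct = sum(1 for q, ans in ANSWER_KEY.items() if responses.get(q) == ans)
        return 100.0 * correct / len(ANSWER_KEY)

    # Scoring ninety students takes the same teacher effort as scoring one.
    students = {
        "student_01": {1: "B", 2: "D", 3: "A", 4: "A"},
        "student_02": {1: "C", 2: "D", 3: "A", 4: "C"},
    }
    for name, responses in students.items():
        print(name, score(responses))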

Some Instructional Technologists sit there and wonder why more teachers are not using good instructional practices like peer interaction, group projects, authentic tasks, task-centered instruction and so on. Some become angry at teachers for their bad practices, but the real reason is that teachers don't have time! All of those practices ultimately take more of a busy teacher's time, and spending more time on teaching and less on research could put a college teacher's job in jeopardy. Imagine approaching the professor I work with and telling him that he does not teach well and needs to change his curriculum to be more task-centered. You explain that this will take more time, but that in the end students will learn more and enjoy the class more. Most professors would respond that they just don't have the time, and this answer is perfectly honest and acceptable. The time in one day cannot be increased even by a minute.

There is a lot that needs to change in colleges today, but I think it needs to start not with teaching practices, but with the structures that prevent teachers from engaging in good teaching practices.

Wednesday, May 14, 2008

Authentic Tasks or Task-Centered Instruction?

I recently read an excellent article by Jan Herrington, Tom Reeves and Ron Oliver about authentic tasks entitled Authentic Tasks Online: A Synergy among Learner, Task, and Technology (2006). The article begins by saying that the most common online learning tries to break information down into digestible chunks, and that distance education instead needs to be seen as part of a synergistic system.

The authors believe that the authentic tasks model will fill this need for synergy, and they give guidelines outlining what an authentic task is. When I first read them, I realized that the authentic tasks described in this article are somewhat different from the whole-task approach we have been working on at our higher-education institution. We work with Dr. M. David Merrill to follow First Principles of Instruction (2002), converting traditional lecture-based classes into task-centered ones.

Authentic tasks seem more drawn out than the tasks Merrill talks about in First Principles (2002), though I think the distinction is somewhat blurred. Either way, the tasks we have been implementing in our higher education classes are usually shorter (1-2 weeks to complete), while the tasks mentioned by Herrington, Reeves and Oliver (2006) can take a whole semester to solve, or no less than a third of one.

Among the definitions of authentic tasks listed in this article, several stand out to me:

Authentic tasks are ill-defined: students have to define the tasks and sub-tasks needed to complete them, and the tasks are open to multiple interpretations and solutions.
Authentic tasks provide the opportunity for students to examine the task from different theoretical and practical perspectives, and students must distinguish between relevant and irrelevant information.

The items above require that students choose their own methods for solving the problem and examine the task from differing perspectives. This is perhaps the furthest departure from what we are doing with our classes. In our approach, heuristics and rules of thumb are provided for solving a problem, and students are taught how they might go about solving the problem themselves. Pitfalls of our approach may include decreased authenticity of the task (people in the "real world" don't have someone showing them how to do a problem; they are just asked to do it) and a lack of creativity in solutions (students will solve the next problem in much the same way as the first).

Pitfalls of the authentic tasks method stem from its sink-or-swim approach. Students will have little guidance on where to start (although often the technology provides affordances) or on what process to take in solving the problem.

Overall, I still think it is best to scaffold students' performance with some guidance for completing the task, or they may fail. Without this guidance, it is easy for a student to become frustrated and give up on a complex task. Perhaps van Merriënboer's 4C/ID model (1997) is a good middle ground: tasks are kept complex, but they are scaffolded only as much as students need, and this scaffolding is removed as student performance increases.

The article also mentions:

Authentic tasks can be integrated and applied across different subject areas.

In dealing with the realities of higher education, we have not been able to integrate different subject areas to a very high degree. But within our approach we have successfully combined English as an International Language instruction with Biology. I see no pitfalls with this kind of integration except perhaps that students will become confused, but only because they have been taught within the confines of separate subject areas for so long.

Also, authentic tasks would be very difficult to implement on a full scale in our outdated education systems. There is a very strong mentality that information should be broken down into manageable chunks and then fed to students. The whole system of education, from colleges, to schools to programs to courses to credits follows this approach. If things like authentic tasks are going to take off, this mindset will have to change and the idea of courses will have to go away.

Another pitfall with the authentic tasks approach is that we are asking students who are novices to do what professionals do and to produce professional work. I like the high expectations that this conveys, but students are not experts. They will not produce completely professional work unless it is in a very narrow topic area. At the same time, many undergraduate students do not take their education seriously enough to produce work at this level.


References:

Herrington, J., Reeves, T. C., & Oliver, R. (2006). Authentic Tasks Online: A Synergy among Learner, Task, and Technology. Distance Education, 27(2), 233.


van Merriënboer, J. J. G. (1997). Training Complex Cognitive Skills: A Four-Component Instructional Design Model for Technical Training. Educational Technology Publications.


Merrill, M. D. (2002). First Principles of Instruction. Educational Technology Research and Development, 50(3), 43.

Friday, May 9, 2008

Peer-Assessment in Higher Education

I have recently done a mini-review of peer assessment in higher education. The results of studies are mixed, but they generally support the conclusion that peer assessment is as valid as instructor assessment as long as it is scaffolded properly.

Here are a few of the articles I looked at and what they said:

  • Cho, K., Schunn, C. D., & Wilson, R. W. (2006). Validity and Reliability of Scaffolded Peer Assessment of Writing from Instructor and Student Perspectives. Journal of Educational Psychology, 98(4), 891.

    • Students and instructors do not trust peer-grading schemes, however, there is high reliability and validity in these schemes if done correctly.

    • Instructors often have no time to grade and therefore skim through assignments while peer graders will take time on each assignment, judging its quality more in depth

    • Instructors grade papers with no other help, while peer graders will usually each grade multiple papers, so that each paper receives a combined rating from multiple peers. Bias can be reduced with this method (a small sketch of combining ratings this way appears after this list).

    • Past studies have predicted either high or low validity for peer grading; they produced mixed results, some because of errors.

    • Even when peer grade validity is high, students may not perceive it as such.

    • Self assessments are generally less accurate than peer assessments and are often more influenced by self-esteem than actual performance. This is one of the reasons students often feel that peer-assessments are not accurate.

    • Multiple peers should be used to rate each piece of work

    • What an instructor views as reliable peer assessment is usually different from what a student views as reliable.

    • Overall, peer assessment may be more valid than instructor assessment because multiple people are rating a single work, instead of a single person rating a single work.

    • Peer review is part of students' learning process.

    • Concerns about reliability and validity are not valid reasons to shy away from peer assessment.

  • Lejk, M. & Wyvill, M. (2001b) The effect of the inclusion of self-assessment with peer assessment of contributions to a group project: a quantitative study of secret and agreed assessments, Assessment and Evaluation in Higher Education, 26(6), 551–561.

    • Peer assessment is better done without self assessment

  • Magin, D. J. (2001) A novel technique for comparing the reliability of multiple peer assessments with that of single teacher assessments of group process work, Assessment and Evaluation in Higher Education, 26(2), 139–152.

    • Group members are generally better able to assess each other's contributions than mentors or teachers are

  • Struyven, K., Dochy, F., & Janssens, S. (2008). The Effects of Hands-On Experience on Students' Preferences for Assessment Methods. Journal of Teacher Education, 59(1), 69.

    • Teachers and student teachers generally react negatively to forms of assessment that they are not used to

    • Traditional assessment methods were often negatively looked upon by students and alternative methods were perceived to enable quality learning

  • Kilic, G. B., & Cakan, M. (2007). Peer Assessment of Elementary Science Teaching Skills. Journal of Science Teacher Education, 18(1), 91.

    • Peer scores significantly correlate with instructor scores

  • Ryan, G. J., Marshall, L. L., Porter, K., & Jia, H. (2007). Peer, Professor and Self-Evaluation of Class Participation. Active Learning in Higher Education: The Journal of the Institute for Learning and Teaching, 8(1), 49.

    • One study with 144 students in higher ed led to a 0.83-0.9 correlation coefficient between instructor and student ratings with forced distribution

    • Another similar study with another 144 students led to a correlation of 0.72 between instructor and student ratings.

    • This study found that rankings were statistically different but not academically different (not enough to affect a student's grade).

    • Problems arising from group grades include “inflated grading of friends, lack of discrimination among members of a group, individuals dominating to seek higher marks, and students who do less work but still benefit from a group grade.” Forced distribution or ranking reduces all of these problems to some degree.

    • Students did not like this type of grading overall.

    • Forced distribution (ranking) of each other's grades affected whether students gave a higher or lower grade to their peers.

    • Peer assessment should be scaffolded

  • Wen, M. L., & Tsai, C. (2006). University Students' Perceptions of and Attitudes toward (Online) Peer Assessment. Higher Education: The International Journal of Higher Education and Educational Planning, 51(1), 27.

    • Peer assessment can increase student-student interaction, enhance students' understanding of other students' ideas, increase learners' understanding in the cognitive and meta-cognitive domains, and develop transferable and social skills

    • Peer assessment methods should make criteria clear to students

    • Anonymous assessment may produce better validity of assessments

    • A study with 280 college students found that most felt it appropriate to use peer assessment as a small portion of their grade

    • Students had a positive attitude toward peer assessment

    • Results suggest that “more effort needs to be placed on giving students responsibilities for grading, to develop a sense of learner control and ownership of their own learning, especially in higher education.”

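To make the "combined rating from multiple peers" idea concrete, here is a minimal sketch with entirely made-up numbers (not data from any of the studies above). It averages several hypothetical peer ratings per paper and computes a Pearson correlation against hypothetical instructor scores, the kind of statistic the Cho et al. and Ryan et al. studies report.

    from statistics import mean
    import math

    # Entirely hypothetical ratings, for illustration only (not from the studies above).
    # Each paper gets several peer ratings; averaging them dampens any single rater's bias.
    peer_ratings = {
        "paper_A": [78, 85, 80],
        "paper_B": [92, 88, 95],
        "paper_C": [65, 70, 60],
        "paper_D": [88, 84, 90],
    }
    instructor_scores = {"paper_A": 82, "paper_B": 90, "paper_C": 68, "paper_D": 85}

    combined = {paper: mean(scores) for paper, scores in peer_ratings.items()}

    def pearson(xs, ys):
        """Pearson correlation between two equal-length lists of scores."""
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    papers = sorted(combined)
    r = pearson([combined[p] for p in papers], [instructor_scores[p] for p in papers])
    print("combined peer ratings:", combined)
    print("correlation with instructor scores: %.2f" % r)
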
Recent efforts of ours to implement a group peer-ranking system in a general education course have come under attack from uninformed people with authority over the course. This will provide some good points of discussion as we go through the process of testing the course.