Design and Development of a Self-Assessment Tool and Investigating its Effectiveness for E-Learning

Manisha Domun 1  and Goonesh K. Bahadur 2
  • 1 Lifelong Learning Cluster
  • 2 Virtual Centre for Innovative Learning Technologies, University of Mauritius, Reduit, Mauritius


One of the most effective tools in e-learning is the Self-Assessment Tool (SAT), and research has shown that students who accurately assess their own performance improve their learning. This study involved the design and development of a self-assessment tool based on the Revised Bloom's Taxonomy framework. As a second step, to investigate the effectiveness of the SAT, first-year students of the BSc Educational Technology programme at the VCILT, University of Mauritius, served as the test sample. At this stage the SAT was provided to only half of the sample, who were randomly chosen and placed into a treatment group (Group A); the remaining half formed the control group (Group B) and worked under the normal conditions of the e-learning platform. A semester exam was devised and administered to the whole sample to determine whether the scores of the two groups differed. Lastly, a feedback form was given to the treatment group only, to gather their views on the SAT. A Student's independent-samples t-test indicated a significant difference in scores between the treatment and control groups. Group A recorded a higher percentage of passes than Group B; failures occurred in both groups, but at a higher rate in Group B. Moreover, most respondents' feedback suggested that the SAT was a useful guide with helpful feedback. The findings concluded that students viewed the SAT primarily as a revision tool that allowed them to assess their own learning.
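The group comparison described above rests on the Student's independent-samples t-test with pooled variance. As a minimal sketch of that calculation, the following standard-library Python computes the t statistic and degrees of freedom for two independent groups; the exam scores shown are hypothetical illustrations, not the study's data.

```python
import math

def independent_t_test(sample_a, sample_b):
    """Student's independent-samples t-test (pooled variance).

    Returns the t statistic and degrees of freedom for two
    independent groups of scores.
    """
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / n_a
    mean_b = sum(sample_b) / n_b
    # Unbiased sample variances (divide by n - 1)
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (n_b - 1)
    # Pooled variance assumes equal population variances in both groups
    pooled = ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)
    t = (mean_a - mean_b) / math.sqrt(pooled * (1 / n_a + 1 / n_b))
    return t, n_a + n_b - 2

# Hypothetical exam scores for illustration only
group_a = [72, 65, 80, 77, 68, 74]  # treatment group (with SAT)
group_b = [58, 63, 55, 61, 66, 59]  # control group
t_stat, df = independent_t_test(group_a, group_b)
print(t_stat, df)
```

The resulting t statistic is then compared against the critical value of the t distribution at the chosen significance level and the computed degrees of freedom to decide whether the difference in group means is significant.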


  • 1. Anderson, L. (2002). Curricular Alignment: A Re-examination. Theory into Practice-Revising Bloom’s taxonomy.

  • 2. Anderson, L.W. and Krathwohl, D.R. (eds.) (2001). A taxonomy for Learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. New York: Addison Wesley Longman.

  • 3. Baume, D. (2009). Writing and using good learning outcomes. Leeds Metropolitan University Journal.

  • 4. Brahmawong, C. (1991). Techniques of Writing Self-Learning Modules. National Institute for Multimedia Education (NIME), Ministry of Education, Chiba, Japan.

  • 5. BYU Community (2013). Copyright 101. Retrieved December 01, 2012, from BYU Copyright Licensing.

  • 6. Chang, N. (2009). Can Students Improve Learning with their use of an Instructor’s Extensive Feedback Assessment Process? In International Journal of Instructional technology and Distance Learning, 6(5), (pp. 49-63).

  • 7. Crump, C. (2005). Designing meaningful and fair tests and assignments: A handbook for teachers. Antigua & Barbuda: Printing and Publishing Co.

  • 8. Carless, D.; Joughin, G. and Mok, M.M.C. (2006). Learning-oriented Assessment: Principles and Practice. In Assessment & Evaluation in Higher Education, 31(4), (pp. 395-398).

  • 9. El Mansour, B. and Mupinga, D.M. (2007). Students’ positive and negative experiences in hybrid and online classes. In College Student Journal, 41(1), (pp. 242-248).

  • 10. Ghosh, M. (2008). Creating Assessment Questions in an eLearning Course. Random Ideas. India.

  • 11. Kostons, D.; Van Gog, T. and Paas, F. (2012). Training self-assessment and task-selection skills: A cognitive approach to improving self-regulated learning. In Learning and Instruction, 22, (pp. 121-132).

  • 12. Krathwohl, D.R. (2002). A Revision of Bloom's Taxonomy. Ohio: The H.W. Wilson Company.

  • 13. Mandernach, B.J. (2003). Quality True-False Items. Retrieved 09 October 2012 from Park University Faculty Development Quick Tips.

  • 14. Marinagi, C. (2011). Web-based adaptive self-assessment in Higher Education. In A. Méndez- Vilas (ed.), Education in a technological world: communicating current and emerging research and technological efforts.

  • 15. Mayer, R.E. (2001). Multimedia Learning. New York: Cambridge University Press.

  • 16. Mayer, R.E. (2002). Rote versus Meaningful Learning. In Theory into Practice, 41(4), (pp. 226-232). doi:10.1207/s15430421tip4104_4a

  • 17. Murayama, K. (2003). Test format and learning strategy use. In Japanese Journal of Educational Psychology, 51(1), (pp. 1-12).

  • 18. Nayak, B.K. (2010). Understanding the relevance of sample size calculation. In Indian Journal of Ophthalmology, 58(6), (pp. 469-470). Accessed on 17 November 2010.

  • 19. Park, O-C. (1996). Adaptive instructional systems. In D.H. Jonassen (ed.), Handbook of research for educational communications and technology, (pp. 138-153). New York: Macmillan.

  • 20. Reiner, C.M.; Bothell, T.W.; Sudweeks, R.R. and Wood, B. (2002). Preparing Effective Essay Questions -A Self Directed guide for Educators. New Forums Press.

  • 21. Riffell, S.K. and Sibley, D.H. (2003). Learning online: Student perceptions of a hybrid learning format. In Journal of College Science Teaching, 32(6), (pp. 394-399).

  • 22. Gagné, R.M. (2004). The Principles of Instructional Design. Cengage Learning.

  • 23. Roediger, H.L., III. and Butler, A. (2010). The critical role of retrieval practice in long-term retention. In Trends in Cognitive Science, 15, (pp. 20-27).

  • 24. Schulz, A. (2005). Effectively using self assessments in online learning. 18th Annual conference on Distance Teaching and Learning. Illinois: The Board of Regents of the University of Wisconsin System.

  • 25. The Independent-Samples t-Test. (n.d.). Available at

  • 26. Vella, J. (2002). Learning to listen, learning to teach: The power of dialogue in educating adults (Rev. ed). San Francisco: Jossey-Bass.
