Quality Analysis of an Interactive Programming Learning Platform Based on ISO/IEC 25010 Using a String-Matching Approach on User Reviews
Abstract
This study analyzes user perceptions of the Quality in Use of the Khan Academy e-learning platform, focusing on two key characteristics defined in the ISO/IEC 25010 standard: satisfaction and efficiency. Analysis of the user review data shows a clear difference in volume between the two aspects: 539 reviews (65%) reflected the satisfaction aspect, while only 290 reviews (35%) related to efficiency, meaning users commented on satisfaction approximately 1.86 times more often than on efficiency. For satisfaction, most reviews were positive (420 reviews, or 77.9%), while 119 reviews (22.1%) expressed negative sentiment. These results suggest that most users are satisfied with their experience of Khan Academy, particularly because of flexible access times, ease of use, and the wide availability of learning materials. In contrast, the efficiency-related reviews were more evenly distributed, with 154 positive reviews (53.1%) and 136 negative reviews (46.9%). This closer balance indicates that while some users appreciate the platform's performance, others report technical issues, including slow access speeds, navigation difficulties, and system instability. Overall, user perception of the Khan Academy e-learning system is generally positive, especially with respect to satisfaction. However, the findings also underscore the importance of addressing technical performance problems to improve efficiency and ensure a seamless learning experience. These insights provide a valuable basis for developing user-centered e-learning systems and contribute to evaluating system quality from the Quality in Use perspective.
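To make the aspect classification concrete, the sketch below illustrates how a keyword-based string-matching step might tag each review as satisfaction- or efficiency-related and how the proportions reported above follow from the stated counts. The keyword lists, the Review structure, and the function names are illustrative assumptions for this sketch, not the authors' actual dictionaries or implementation.

```python
# Minimal sketch, assuming illustrative keyword lists and pre-labeled sentiment;
# these are NOT the authors' exact dictionaries or dataset.
from dataclasses import dataclass

# Hypothetical keyword dictionaries for the two Quality in Use characteristics.
SATISFACTION_KEYWORDS = ["satisf", "enjoy", "love", "helpful", "convenient"]
EFFICIENCY_KEYWORDS = ["slow", "fast", "lag", "load", "crash", "navigate"]

@dataclass
class Review:
    text: str
    sentiment: str  # "positive" or "negative", assumed to be labeled separately

def matches_aspect(text: str, keywords: list[str]) -> bool:
    """Simple substring (string-matching) check against an aspect's keyword list."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in keywords)

def tally(reviews: list[Review]) -> dict:
    """Count positive/negative reviews per aspect using keyword string matching."""
    counts = {"satisfaction": {"positive": 0, "negative": 0},
              "efficiency": {"positive": 0, "negative": 0}}
    for review in reviews:
        if matches_aspect(review.text, SATISFACTION_KEYWORDS):
            counts["satisfaction"][review.sentiment] += 1
        if matches_aspect(review.text, EFFICIENCY_KEYWORDS):
            counts["efficiency"][review.sentiment] += 1
    return counts

# Reproducing the proportions reported in the abstract from the stated counts.
satisfaction_total, efficiency_total = 539, 290
grand_total = satisfaction_total + efficiency_total           # 829 aspect-tagged reviews
print(round(satisfaction_total / grand_total * 100, 1))       # ~65.0 (% satisfaction)
print(round(efficiency_total / grand_total * 100, 1))         # ~35.0 (% efficiency)
print(round(satisfaction_total / efficiency_total, 2))        # ~1.86 (satisfaction:efficiency ratio)
print(round(420 / satisfaction_total * 100, 1))               # ~77.9 (% positive satisfaction reviews)
print(round(154 / efficiency_total * 100, 1))                 # ~53.1 (% positive efficiency reviews)
```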