Your group deserves kudos for creating a wonderful interactive experience for us all and for thinking through the details at every turn to make it possible. This was certainly a methodical approach to finding opportunities, one that also drew on the collective brainpower of our cohort. In keeping with the activity of providing a critical analysis of this data, though, it should be noted that this was a limited exercise with incomplete information. Some have noted the obvious gaps in the "assessment and evaluation" categories, and possibly in the K-3 group. It could be that the products selected do have a formative evaluation and assessment function, but we were only asked to select 1 or 2 of the 4 categories available. I think, as well, that we all bring certain biases into the types of apps or tools we selected, based on the sector we work in. As I work in higher education, seeking apps that address the primary grades was not of interest to me. Perhaps going through this process as a market analysis with a larger group of professionals might yield more accurate results.
However, the Primary Product Domain chart seemed very inclusive and applicable to many learner segments. Someone made the comment that gaps could be due to "educational tech" still being in its infancy. On the contrary, I think a lot of good edtech software has been around for a long time. The gap, to me, is in making teachers and faculty aware of what's out there and of its direct application to their learners. As well, the lag time between making instructors aware, convincing institutions to invest in the product, and then training instructors to use the software, all before the next technology comes to market, needs to be shortened greatly. That would, of course, be our job as graduates of this program 🙂
In any case, I would love to keep your OER as a reference, since there are so many apps I would like to explore further when there is more time. Thank you again!