Image: View of a slow river with forested and grassy riverbanks. Public domain CC0.

As a nonprofit organization working in the open education space, we have witnessed first-hand the frenetic pace of all things AI, particularly as they relate to education. While several members of our team have eagerly explored ways in which AI tools could be applied to their work, we felt the need to take a collective deep breath and intentionally consider how—as an organization dedicated to making learning and knowledge-sharing participatory, equitable, and open—we should apply our own organizational values to the ways in which we engage with AI. Openness is a foundational pillar of the work we do because we believe that an open ethos is key to the development of equitable and inclusive learning environments, and contributes to the creation of a more just society. We share these values with many others who work in the open education space, as well as with those in open source, open access, open data, and open science.

From early on, ISKME has worked collaboratively to generate a set of cultural values to guide us as a learning organization dedicated to advancing an open ethos. We felt that strong principles and healthy behaviors were necessary pillars to guide a team to focus authentically on openness, continuous learning, and knowledge sharing. Now, we set out to apply our values to create a set of guiding principles for our approach to AI. These principles are intended as a commitment to the values of “open” in all parts of our work, from libraries of open content to pedagogical practice to platform and tool development.

We believe that through meaningful community participation and collaboration, we are able to ensure our work prioritizes solutions that meet real needs and build toward equitable outcomes from the outset. In this way, “open” is an ethos that brings with it a set of values and ways of doing our work.

Open Ethos. We will bring values of “open” to the ways in which we work with AI.

Commodification. We will support the common good by imagining AI tools that help communities develop and thrive. We will work with partners in reciprocal relationships, where users need not opt in to commodification as a condition of participation, and where the goal of tool development is not profit maximization.

Provenance. We will recognize where the information used to train models comes from. Additionally, content must be verifiable as a means to respect individuals’ rights and to maintain accountability, authorship, and attribution.

Data sovereignty. We will respect and recognize that all individuals have the right to their own data.

Informed consent. We will get consent to use a person’s or organization’s data for AI purposes. Specifically, active consent is the presence of a yes, not the absence of a no.

Intentional use. We maintain that AI is a tool, and not a solution in and of itself. Therefore, we will acknowledge the intended goal of using AI, and use it judiciously, recognizing that its benefits do not come without costs, whether financial, moral/ethical, or environmental.

Transparency. We will be transparent in the use of AI. This includes how it is being used, the model’s reasoning and how it made decisions, the role of human oversight, and in the case of content creation, specifying that it was made using generative AI.

Bias. We will proactively strive to evaluate and remediate bias in AI’s design and application, recognizing that the lived experiences of a tool’s creators inform its design and functionality. The inverse is also true: those excluded from the design process do not benefit equally.

Continuous Evaluation. We will continuously evaluate the benefits, promises, challenges, dangers, and unknowns of the use of AI, which will help us and our communities to make informed decisions.

We offer these principles as a starting point, a way to provide direction while acknowledging the rapid evolution of this technological space. We realize that creating a set of guiding principles is a bit like trying to put a stick in the mud of a moving river. However, just as placing a stick in the flowing river can help us observe the water’s movement and flow, judge the river’s depth, or sense where there might be solid ground beneath, these principles will serve as our anchor. This simple physical act by us as humans cannot be replicated by code. 

In some ways, the stick itself embodies these guiding principles. We chose to place the stick in the mud of the moving river because we do not wish to simply sit on its banks watching it flow. As the Greek philosopher Heraclitus understood, the river will be changed, because we have changed it by simply stepping into it. And we will have changed as well, because you cannot step into the same river twice.

We invite partners and others in the educational community to develop their own guiding principles by reusing ours, adapting them to reflect their unique organizational values and perspectives, and sharing them in this dynamically changing landscape.

Licensed by ISKME CC-BY-SA