To the extent that current practices fall short of best ethical practices, this project will seek to craft recommendations for promoting sustainable ethical practices in the English Wikipedia, Wikinews, and other Wikimedia projects which could benefit from a sober examination of sound ethical principles.

Ethical Management of the English Language Wikipedia

Scalability of ethical management & the MediaWiki software

From an ideal viewpoint, management of any wiki involves aspects of software maintenance. Unfortunately, the software appears increasingly neglected over time, as features that were once useful become hurdles to an ever larger audience. In the case of Wikipedia, ethical management of the project relies on the functionality and implementation of the MediaWiki software. By virtue of its very size and scope as an encyclopedia, Wikipedia gives pragmatic evidence of what kinds of automation can help in that management, and it serves here as a case study for learning ethical management.

  Advanced study of this resource may require a background in computer science.

Humans make the software, and they must make ethical decisions about how to create and maintain it. The software is the medium through which the community engages with one another, so the software itself is implicated in ethical choices. Sometimes it is more ethical to provide that feedback through the software rather than engage more directly with another member of the community, but the feedback mechanism has to be created first. Put another way: it may be wiser to change the software than to attempt to change one or more members of the community.

There are other projects that dive deeper into the MediaWiki software itself, but if those projects receive no feedback from the people who use the software, this resource can help bridge the gap between ethical management of the wiki and the MediaWiki development cycle.

From an engineering perspective, the issue is easily recognized as one of scalability. The audience and user base of the software tend to grow faster than the number of developers and maintainers, and the developers are usually aware of this. Out of all the issues to address, however, there remains the problem of discovering which ones are priorities, and the question of how a developer knows how to correctly model and program a solution. Without feedback, or without the proper feedback, the developer spends more time researching how to improve the software than actively implementing improvements. Further, the developer tends to fall back on incremental fixes to the current model, because the research required leaves too little time to work through all phases of the system development life cycle and properly address scalability.


In the five phases of the Systems Development Life Cycle (SDLC), you are part of the audience addressed by the maintenance phase. You interact with the engineers (the scientists, or other liaison roles) who work together to improve the system. From your perspective in the maintenance phase, the rest of the cycle may appear opaque, yet maintenance is considered the last phase of the cycle. Once feedback is gathered from you, the cycle continues: the engineers restart with the planning and analysis phases. If no feedback is gathered, or the feedback is inadequate, the cycle ends and system development comes to a standstill.
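The dependence of the cycle on feedback can be sketched in code. This is an illustrative model only, not anything from MediaWiki itself; the phase names follow one common five-phase breakdown of the SDLC, and `run_sdlc` is a hypothetical function invented here.

```python
# A minimal sketch of the SDLC as a loop that only continues while
# usable feedback arrives from the maintenance phase (an assumption of
# this illustration, not code from any real project).

PHASES = ["planning", "analysis", "design", "implementation", "maintenance"]

def run_sdlc(feedback_per_cycle):
    """Count completed cycles; development stalls when feedback dries up."""
    cycles = 0
    for feedback in feedback_per_cycle:
        if not feedback:          # no (or unusable) feedback: the cycle ends
            break
        for phase in PHASES:
            pass                  # each phase would do real work here
        cycles += 1
    return cycles

print(run_sdlc([["bug report"], ["feature request"], []]))  # → 2
```

The point of the sketch is the early `break`: however much work the other four phases could do, the loop never reaches them again once the maintenance phase stops producing feedback.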

If the cycle ends, those left to manage the wikis must make do with the tools they have. Further, if trouble arises when there is no mechanism for feedback, people are left to place blame on others when they do not work well together or with what is available, and that course of action, the blame game, may not be the most ethical solution. When the engineers themselves are blamed in this way, and they receive no feedback because they are being ignored, it becomes much harder, if not impossible, for them to research how to improve the system. Remember that the software is the medium through which the community engages with each other. If that medium fails, communication fails. This is where we learn how to bridge the digital divide.

Pragmatics & automation

We can pragmatically look at several areas of Wikipedia and pinpoint areas for automation. The hard part is what is known as the Guru Meditation error. Users recognize it as a crash, a BSOD, or some other software failure, but these are particular cases of errors. On the engineering side, the software development cycle includes case history, and some of that case history may include personal or private information. It would be easy for the engineer to explain why something was coded a certain way if there were no privacy concerns, but there are privacy issues (and proprietary issues in other software). The engineer and the scientist are left with the task of creating ways to explain why the implementation exists as it does while avoiding any disclosure of these issues. Instead of actual disclosure, the user gets the cryptic Guru Meditation. Consider the art of crafting a true explanation without disclosure, and you will understand why they are called gurus, and why the whole act is a kind of mediation. Pragmatically, these explanations became generalized and enumerated, and you get the error number you see on a crash, if you get one. There are no enumerated Guru Meditation errors in the social engagement of Wikipedia or the wikis, even if the underlying factors can be seen by the developers.
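The idea of generalized, enumerated explanations can be sketched as a simple lookup. The codes and messages below are entirely hypothetical, invented for this illustration; the only point is that each code maps to a truthful but non-disclosing explanation, in the spirit of the Amiga's numbered Guru Meditation messages.

```python
# Illustrative sketch (hypothetical codes): enumerating generalized
# explanations so a failure can be reported truthfully without
# disclosing private case history.

GENERALIZED_ERRORS = {
    0x01: "storage subsystem failure",
    0x02: "permission or access-control failure",
    0x03: "resource exhaustion",
}

def mediate(code):
    """Return a true but non-disclosing explanation for an error code."""
    return GENERALIZED_ERRORS.get(code, "unclassified failure")

print(mediate(0x02))  # → permission or access-control failure
```

Nothing in the table reveals which user, which page, or which incident produced the failure; that is the mediation.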

Events have occurred in such social engagement that can be automated, and the finger-pointing caused by an ended cycle can be avoided with such automation. There are many techniques to accomplish this goal. One popular technique is to build a case history and convert it into use cases; the case history is not mandatory, since use cases can also be brainstormed. Consider the dispute resolution process of Wikipedia: there is potential even to combine those processes as use-case events. The dispute resolution process probably won't look the same after the use cases are considered and automation steps are taken. One thing can be done now, however: those who manage dispute resolution can learn to recognize when use cases can be made. This process is not without proven pitfalls, as we must stay aware of wrongful social-engineering attempts. It is also easy to quickly toss out ideas to automate or change processes, but it may not be so easy to live with the ones that get implemented, only to have a flood of them labeled as mistakes.
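Converting case history into use cases can be sketched as stripping a record down to its generalizable parts. The record layout and field names below are hypothetical, invented for illustration; the point is that identifying details are dropped before the case becomes a reusable use case.

```python
# Illustrative sketch (hypothetical record layout): condensing a dispute
# record into a use case while stripping details that must not be disclosed.

def to_use_case(case):
    """Keep only the generalizable fields of a dispute record."""
    return {
        "trigger": case["trigger"],
        "steps": case["steps"],
        "outcome": case["outcome"],
        # deliberately dropped: usernames, diffs, other identifying data
    }

case = {
    "trigger": "revert war on a contested article",
    "steps": ["talk-page discussion", "third opinion", "formal mediation"],
    "outcome": "page protected until consensus",
    "participants": ["UserA", "UserB"],  # private detail, not carried over
}

print("participants" in to_use_case(case))  # → False
```

An allow-list of fields, as above, fails safe: a newly added private field is excluded by default rather than leaked by default.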

Feedback helps, so here are some issues to address and areas where management can be automated:

Resources

See also