Zixin Z’s 2023 Q&A: Features & Content Policy

[Note: There will be 4 Q&A posts total, covering all the topics brought up during the user-submitted Q&A period. Candidates were limited to 300 words per answer.]

Would you be in favor of expanding the Archive’s features to improve user experience? If so, what features do you think AO3 needs to add or improve? What AO3 features would you prioritize to help people avoid what they don’t want to see?

Yes, I agree that the Archive of Our Own (AO3) can and should expand its features to improve user experience! As I mentioned in my platform (Question 3, Paragraph 2), I think it would be very helpful for non-English-speaking users if they could browse the website in their native language. Personally, I would also like to have a list of my kudos history, but I understand that adding features requires a great deal of work from the Accessibility, Design & Technology (AD&T) volunteers, and there are existing workarounds for some of these feature requests. For example, there are unofficial tools that let users see whether they have already opened or left kudos on a work shown in search results. The “Unofficial Browser Tools” section of the AO3 Frequently Asked Questions is a great place to find resources covering features that are not yet available on the Archive!

As for features to help people avoid certain content, I think the most direct way is to exclude the relevant Archive warnings in the work filters if they do not wish to see one or more of the current warning categories. AO3’s Terms of Service (ToS) require creators to tag their works with the appropriate Archive warnings (see ToS IV.K. and the FAQ). If a user believes a work lacks sufficient warnings, they can contact the Policy & Abuse Committee (PAC) by submitting the contact form (temporarily closed as of 13 July 2023 due to the DDoS attack). If what they wish to avoid is not covered by the Archive warnings, they can use Additional Tags to filter out content. There are also unofficial tools such as AO3 Saved Filters and AO3 Savior, so users don’t have to enter the same search filters every time.
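For readers curious how blocklist-style tools like AO3 Savior work under the hood, here is a minimal conceptual sketch, not AO3 Savior’s actual code, of a userscript that hides work blurbs whose tags match a personal blocklist. The CSS selectors and the example tag names are assumptions made purely for illustration.

```typescript
// Hypothetical sketch of a blocklist-style userscript (not AO3 Savior's actual code).
// It hides any work blurb whose tag list contains a blocked tag.
// The selectors ("li.blurb", "a.tag") and the example tags are assumptions for illustration.

const blockedTags: string[] = ["Example Tag I Avoid", "Another Example Tag"];

function hideBlockedWorks(): void {
  document.querySelectorAll<HTMLLIElement>("li.blurb").forEach((blurb) => {
    // Collect the visible tag text on this work blurb.
    const tags = Array.from(blurb.querySelectorAll<HTMLAnchorElement>("a.tag"))
      .map((tag) => tag.textContent?.trim() ?? "");

    // Hide the blurb if any of its tags is on the personal blocklist.
    if (tags.some((tag) => blockedTags.includes(tag))) {
      blurb.style.display = "none"; // a real tool might show a "hidden work" placeholder instead
    }
  });
}

hideBlockedWorks();
```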

Do you support adding additional mandatory archive warnings (for example, warnings for incest and slavery), and do you think this is feasible?

Whether additional Archive warnings should be added, and whether doing so is feasible, depends on the specific warning proposed. According to the ToS, if a user reports a work for insufficient Archive warnings, PAC is obliged to investigate and determine whether the work contains content covered by the relevant Archive warning. Therefore, it is very important that any new Archive warning added to AO3 is clearly defined and can be investigated with reasonably consistent conclusions each time. Of the examples given, I don’t think “Incest” would have an easy standard if it were added as an Archive warning. The definition of incest differs across times and cultures. In China, marriage between cousins was not considered incest in the past; however, it is generally considered incest now. If a work in which two cousins marry is set in ancient China, should the creator be asked to add the “Incest” warning?

Another important factor to consider when adding Archive warnings is that the proposed content should be something that readers in most cultures would need to know about in advance. Again, taking “Incest” as an example: while some English-speaking fandoms might consider it a problematic trope, incest (especially sibling incest) is, as far as I’m aware, a fairly common trope in many East Asian fandoms. I understand that users might want to avoid seeing incest-related works and might not be satisfied with the existing tag filters, but adding it to the Archive warning list would feel, to some East Asian fans, like adding “Mafia AU” to the list. At the moment, I do not think it is feasible to add new Archive warnings; however, I am open to exploring alternative ways of addressing these issues.

What is your stance on AI scraping/learning from the Archive and AI produced works on OTW platforms?

Regarding AI scraping/learning, I believe fan creators should retain the right to decide whether their works are included in AI training datasets. Currently, using AO3 works in AI training sets for learning purposes is not against U.S. law. Our Legal Committee has also presented this stance to the U.S. Copyright Office. Meanwhile, if new technologies emerge that could protect the website from unwanted scraping, I would actively look into them and consult my colleagues in Support and AD&T to see whether they are feasible to implement.

Regarding AI-produced works, I don’t consider such content a violation of the AO3 ToS. For policy-related reasons that I will elaborate on in my answer to the next question, it is very likely that AI-produced works will continue to be allowed on the Archive unless a significant development in the technology alters their ethical and/or legal status.

While I don’t agree that companies such as OpenAI should use fanworks to train AI models without the creators’ consent, I also don’t think the person using the model should be held responsible for that. It is also worth mentioning that AI models can be useful tools for non-native speakers to beta their works in English, to translate their materials, and to communicate with other fans. Some users rely on AI models to generate tropes or offer writing suggestions, which makes their work only partly produced by AI and partly by their own effort. Banning AI-produced works might undermine the maximum inclusivity principle, as it could discourage creators who edit their works with spelling or grammar-checking tools from posting.

In your opinion, what would a sensible policy regarding AI-generated content on AO3 look like? How would you enforce this policy such that NO human fic writers are harmed in overzealous attempts to rein in AI-generated content, as seen on art platforms which attempted an AI ban? Do you think AI is something that PAC can accurately detect and regulate/restrict?

I would like to start my response by answering the last question: no, I don’t think PAC can accurately detect whether a work was created by AI models. As far as I know, there is no AI detector that can tell whether content is AI-generated with a 100% success rate. I also don’t believe any person can reliably determine whether a fanwork was created by AI. Even if a work reads like the output of an AI model and the creator claims that it is AI-generated, it is still possible that the person is simply mimicking AI’s writing style.

Therefore, I don’t agree that the Archive should ban AI-generated works, because there is no effective way and no reliable tool for PAC to enforce such a policy. For the same reason, I also don’t think it should be added as a new Archive warning, because mandatory warnings have to be enforceable as well. There is not yet a canonical Additional Tag for AI because the Tag Wrangling Committee has paused No Fandom Freeform canonisation for several years for various reasons, such as technical limitations. The Committee is going to reopen discussions on the topic soon, and “Work Created with AI” is one of the top priorities on the list. Once the canonical additional tag is created, users will be able to filter out content they wish to avoid more conveniently.

Since adding additional tags is completely voluntary for creators, this approach would also reduce the potential for harassment. If a user sees someone harassing creators for posting AI-generated content on AO3, they can always report the ToS violation to PAC.

How do you feel about AO3’s principle of maximum inclusivity of fanworks? Are you willing to uphold AO3’s commitment to protecting content that many consider controversial or problematic? Where do you personally think the line should be drawn with respect to AI, racism, etc.? What are the candidates’ thoughts on content currently being hosted on the site, including the Archive-level Minor warning, and how it relates to the site’s availability in various countries?

I think maximum inclusivity is one of the most fundamental principles of AO3 and the OTW, and it is important for both to uphold it. I have upheld this commitment in my work in PAC: we often receive user reports about works containing controversial topics that do not violate the AO3 ToS, and we explain to reporters our principle and why such works can stay on the Archive. I am willing to carry this commitment into my Board work if I am elected.

For my opinions on AI, please see my answers to the two questions above. Regarding racism and other controversial topics, I think the line should be drawn where content explicitly harasses other users, whether an individual or a group. Maximum inclusivity can only be sustained by mutual respect; it does not mean that AO3 tolerates harassment.

If the “Minor warning” in the question refers to the Underage warning, then I think such content has a place on AO3 just as other works do. I understand that in some countries, consuming fictional content depicting underage sex is illegal. However, it is not possible for the Archive to abide by the laws and regulations of every country where our users live. The OTW is incorporated in the U.S. and is only required to abide by U.S. law. Neither of the countries where AO3 is banned has stated that the reason for the ban was specifically the Underage content hosted on the Archive. For users living in countries that prohibit consuming fictional explicit underage content, I recommend avoiding works tagged with that warning. They could also use the unofficial tools mentioned above to save their search preferences.

What measures will you take to better protect creators from harassment on AO3? Would you implement methods to protect creators from harassment in bookmarks? E.g. creators can set “disallow/hide comments or tags on public bookmarks or when a user changes their private bookmarks with notes to public”, or options to delete or respond to bookmarks?

For general measures to protect creators on AO3, I think expanding PAC would increase our efficiency in handling harassment reports, as we are currently understaffed: PAC lost a number of volunteers after the CSEM attack in May 2022, and we had to postpone our recruitment plans due to the attack and other reasons. I also hope that the ToS update will give PAC clearer guidelines for determining what constitutes harassing content, so that we can better support creators and other users.

I do agree that harassment in bookmarks is an issue that needs to be addressed! Unlike comments, creators cannot choose to disable or moderate bookmarks on their works. At the moment, creators can only report bookmarks after the fact to prevent further harassment, and since there is currently no way to report a bookmark directly, a creator can only do so by reporting the work itself or the harasser’s user page. I think it would be much more convenient for users if there were a way to report specific bookmarks.

Another solution, in my personal opinion, would be to extend the block function to bookmarks. This way, if a creator does not wish a certain user to leave public bookmarks on their works, they can block that user, and all public bookmarks the user leaves on their works will become private, preventing further harassment. Extending the block function to bookmarks, rather than giving creators the option to disable, delete, or respond to bookmarks, also avoids weakening an important Archive function for readers.

Preserving fan culture is an OTW mission, but when preserving & recording history, how do you think, say, Fanlore can acknowledge, warn about, or prevent the replication of harassment & hate speech? In your volunteer experience, what resources are available for volunteers & users on what to do when encountering such cases?

I am not a Fanlore volunteer, and I do not use Fanlore frequently or edit content on the wiki, so I can only discuss it from my limited knowledge of the project and input from my colleagues. In terms of preserving and recording fannish history, I think it is important to strive for accuracy and record all relevant narratives while upholding the standard of least harm toward fans, fanworks, and fan communities. That being said, while harassment and hate speech are not part of the fandom future we would like to see, I don’t think it is beneficial to combat them by erasing this history from the wiki.

Fanlore volunteers introduced me to their Plural Point of View Policy, which acts as guidance for editors to stay objective and inclusive. Anyone can register a Fanlore account and edit the wiki’s content, so if a person has concerns about a certain article, they are welcome to offer their input and discuss it with other editors on the article’s Talk page, or edit the article directly. I think it would be helpful if Fanlore could develop labels for articles that contain harassment and hate speech material, to better warn readers of triggering content. However, I understand that deciding whether to adopt a new policy, and then implementing it, requires careful discussion, which takes time and effort from Fanlore volunteers and editors.

Fanlore volunteers also suggested that the Talk page is a great place for editors to discuss their opinions when an article touches on controversial topics. Fanlore additionally has a Discord server where editors and volunteers can hang out and communicate. If there are any additional resources Fanlore would like to have, I will do my best to provide them if I am elected.

How important do you think it is to focus on making sure the AO3 software continues to be developed and improved so other people can set up their own archives with their own content and conduct policies?

While it would be nice to have ready-to-go AO3 software for other fans to set up their own archives, there are limits to what is feasible. First of all, the complexity of the Archive’s code requires current volunteers to focus on maintaining the Archive itself rather than on making the software more accessible to other archivists. AO3 is in beta, meaning that we are still in the process of developing and testing features. If we do provide a more accessible software package, I hope it will be a more stable, well-rounded future version.

That being said, AO3’s code is publicly available on GitHub, and anyone who wishes to set up their own archive using it is free to do so at any time! I do want to point out that AD&T volunteers informed me that, given the Archive’s complexity, using our code to build another archive might be more difficult than using an existing web content management system such as WordPress. Using the Archive’s code as a foundation may therefore require more IT, coding, and web-development knowledge from the developer. Regardless, to my knowledge, at least two archivists have used AO3’s code to build their own archives (I truly admire the effort and the hours of work they have put in), and I would love to see that number grow!

Comment bots on AO3 are a growing problem. While some of the fixes for that are “better spamblockers,” would you be willing to promote something like OpenID to allow comments from people without AO3 accounts?

There have indeed been high volumes of spam comments in the past several months! But I do think that our spam blocker, Akismet, does a relatively good job of blocking them. Generally, a given spam pattern can be effectively blocked within a reasonable amount of time. We have also shared information about spam comments with our users on social media to avoid confusion. Currently, if a creator is worried about receiving spam comments, the most effective option is to change the work’s setting to “Only registered users can comment”. Enabling comment moderation also keeps spam comments from appearing in the work’s comment section.
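As a very rough illustration of how a spam blocker like Akismet is typically consulted, the sketch below sends an incoming comment to Akismet’s comment-check endpoint and routes suspected spam into a moderation queue. This is not AO3’s actual implementation; the API key, site URL, and function names are placeholder assumptions.

```typescript
// Rough, illustrative sketch of consulting Akismet's comment-check API before publishing
// a comment. Not AO3's actual implementation; AKISMET_KEY and the blog URL are placeholders.

const AKISMET_KEY = "your-api-key"; // placeholder
const CHECK_URL = `https://${AKISMET_KEY}.rest.akismet.com/1.1/comment-check`;

interface IncomingComment {
  userIp: string;
  userAgent: string;
  author: string;
  content: string;
}

async function looksLikeSpam(comment: IncomingComment): Promise<boolean> {
  const body = new URLSearchParams({
    blog: "https://example-archive.org", // placeholder site URL
    user_ip: comment.userIp,
    user_agent: comment.userAgent,
    comment_type: "comment",
    comment_author: comment.author,
    comment_content: comment.content,
  });

  const response = await fetch(CHECK_URL, { method: "POST", body });
  // Akismet's comment-check endpoint answers with the literal text "true" (spam) or "false".
  return (await response.text()).trim() === "true";
}

// Example usage: publish clean comments, queue suspected spam for moderation.
async function handleComment(comment: IncomingComment): Promise<"published" | "queued"> {
  return (await looksLikeSpam(comment)) ? "queued" : "published";
}
```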

According to information provided by AD&T volunteers, AO3 used to allow users to log in with their OpenID accounts, but we stopped supporting it eight years ago because very few users chose to do so. I am not deeply familiar with OpenID, but if registering an OpenID account only requires a single email verification, it might not be difficult for spammers to mass-register OpenID accounts and leave spam comments on AO3 even if we required OpenID for anonymous comments. One of the more effective ways to prevent spam would be to enable reCAPTCHA, but as far as I’m aware it is not the most user-friendly tool and would very likely reduce the site’s accessibility.

Fandom cultures can vary significantly. How would you best reflect a specific fandom’s expectations in tag canonization and synning? May I please know if you support speeding up the conversion of large and small non-canonical tags into canonical ones? Canonical tags make it easier to include or exclude works from search.

In tag wrangling, each wrangler assigns themself specific fandoms of their choice and takes care of the character, relationship, and additional tags in those fandoms. Usually, a wrangler chooses fandoms they are familiar with and has general knowledge of the fandom’s practices and preferences, such as character name choices and the usage of additional tags. In that case, they can canonise tags customised to the fandom’s needs in addition to following the Wrangling Guidelines. If none of the wranglers covering a fandom is familiar with it, they will still research the canon and its fanworks to see which format works best for users. If a user thinks a certain tag is not correctly formatted, they can always contact the Support Committee to offer suggestions.

Currently, graduated wranglers are expected to wrangle the tags in their fandoms at least once every two weeks and to keep tags under one month old. This is because different fandoms receive vastly different volumes of tags: some fandoms have thousands of incoming tags every week and require multiple wranglers checking the bin frequently, while smaller fandoms may only occasionally receive a couple of tags. Each wrangler also has their own pace: some prefer to do a batch of wrangling on the weekend, while others wrangle a little every day. I think the current timeframe for tag canonisation (one month) is a reasonable expectation for volunteers, considering the variation in workload and work style. Again, if a user notices a tag that hasn’t been wrangled (attached to a fandom) for a prolonged period of time, they can contact Support.

As I mentioned above, the canonisation process for No Fandom Freeform tags has been suspended, but the discussion will resume soon, and I hope new canonicals will help users better filter works in the future.