Week 8 Reading/Watching Blog - QU HONGYU 굴홍우
Summary:
The first reading covers Wikipedia's reliability guidelines for citing sources. Wikipedia requires all content to be based on reliable, published sources and to cover both majority and significant minority views. The article explains how to judge the reliability of a source, emphasizing independence, fact-checking, and accuracy. Different types of sources (academic material, news organizations, editorial reviews, and so on) are also used differently on Wikipedia. The article notes that a source's reliability depends on context: editors must decide case by case whether a source is appropriate to support a given statement. It also covers special cases such as outdated sources, news aggregation sites, and commercial sources.
The second reading addresses how Wikipedia handles biased or opinionated sources. Wikipedia requires articles to be neutral, but reliable sources do not themselves need to be neutral; sometimes a biased source is the best available support for a point of view. When using such sources, editors must judge whether they meet the criteria of a reliable source, such as fact-checking, editorial control, and independence. The article also discusses questionable sources (e.g., extremist websites, self-published content) and user-generated content (e.g., social media, forums), emphasizing that these are generally not appropriate to support contentious claims. In particular, it notes that AI-generated content (such as ChatGPT output) is often unreliable because it can produce false or "hallucinated" information. Finally, the article discusses how to select reliable sources in specific situations (such as medical claims, fringe theories, and quotations).
Interesting point:
The article mentions that AI-generated content is often unreliable because it can produce false information or "hallucinated" content. This problem is increasingly common in how information spreads today, especially in journalism, academia, and social media.
Discussion:
Do you think that, as AI technology advances, we will find an effective way to verify the reliability of AI-generated content?
We may see platforms that blend AI speed with human oversight — for instance:
- AIs generate content
- Humans (or specialized models) verify or annotate it
- The result is logged, archived, and rated for reliability
Think: "AI writes it, humans approve it", like peer review for algorithms. A minimal sketch of this workflow follows.
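To make the workflow concrete, here is a small Python sketch of the "AI writes it, humans approve it" idea. Everything in it (the Draft and Review classes, the review_draft helper, the model and reviewer names) is hypothetical illustration, not any real platform's API: an AI-generated draft is recorded, a human reviewer approves or rejects it with a reliability rating, and the verdict is appended to a log.

```python
# Hypothetical sketch of a human-in-the-loop verification pipeline:
# an AI draft is recorded, a human reviews it, and the verdict is logged.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Draft:
    text: str
    model: str                      # which AI produced the draft (hypothetical name)
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class Review:
    draft: Draft
    reviewer: str
    approved: bool
    reliability: float              # 0.0 (unverified) to 1.0 (fully verified)
    notes: str = ""


def review_draft(draft: Draft, reviewer: str, approved: bool,
                 reliability: float, notes: str = "") -> Review:
    """Record a human verdict on an AI-generated draft."""
    if not 0.0 <= reliability <= 1.0:
        raise ValueError("reliability must be between 0.0 and 1.0")
    return Review(draft, reviewer, approved, reliability, notes)


# Usage: the AI-generated claim is only published once a human signs off.
draft = Draft(text="The Eiffel Tower is 330 m tall.", model="example-llm")
review = review_draft(draft, reviewer="editor_42", approved=True,
                      reliability=0.9, notes="Checked against official site.")
log = [review]                      # append-only archive of verdicts
print(f"approved={review.approved}, reliability={review.reliability}")
```

The append-only log mirrors the "logged, archived, and rated" step above: past verdicts are never overwritten, so reliability ratings can be audited later, much as a peer-review history can.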