
Recently, I engaged in a deep conversation with several close friends about a complex sentiment sweeping through our society regarding artificial intelligence. We’ve observed a common reaction of rejection: no matter how exquisite an AI-generated painting, how rigorous a paper, or how moving a piece of music, its value seems to be instantly diminished by the label “Made by AI.” Some call this “AI discrimination,” an apt term, but I believe what it reveals runs far deeper than the word “discrimination” can capture. This is a storm concerning our very sense of self, and we are standing in its eye.
We must admit that this instinctive rejection does not stem from ignorant prejudice but is rooted in a deep and legitimate identity anxiety. For millennia, humanity has regarded wisdom and creativity—the fruits of inspiration, sweat, and long processes of learning—as the core values that distinguish us from all other things. The rise of AI, however, is shaking this foundation at an unprecedented speed. When a machine can replicate in an instant a skill that took us a decade of hard work to master, the panic of “Where does my uniqueness lie?” is both real and piercing. What we reject is perhaps not the AI itself, but the reflection of a future in which our own value might be diluted. What we defend is the sanctity of the human “process”—the struggle, the failure, the epiphanies, and the emotional investment of creation, all things we consider to be value itself.
However, if we shift our gaze from our inner anxiety to the long river of history, we find that this scene is familiar. When photography was born, painters scorned it as a “soulless mechanical reproduction,” vowing it could never ascend to the temple of art. When the electronic synthesizer appeared, musicians denounced it as “cold industrial noise,” incapable of expressing true emotion. But what was the result? Photography pioneered a new visual art form, and the synthesizer gave birth to countless new genres of music. History teaches us that humanity’s instinctive resistance to disruptive technology is a recurring social stress response. It is a period of painful adjustment, a transitional phase.
Even more interestingly, today’s “discrimination” and resistance are, in fact, serving as an invisible social feedback mechanism. It is precisely because of public concern over copyright, authenticity, and information pollution that tech developers are forced to consider watermarking and traceability for AI-generated content. It is because of creators’ fears of being replaced that we are compelled to explore the boundaries and future of human-AI collaboration. From this perspective, the storm is not purely destructive; it is also shaping a more responsible outline for the future of technology.
So, where do we go after the storm? Merely understanding the anxiety and reviewing history is not enough. True wisdom lies in piercing through the fog of emotion and beginning the work of building a new order.
I believe we must abandon the simplistic binary opposition of being “pro-AI” or “anti-AI.” The key is to learn to differentiate our approach. We should establish a coordinate system of “source sensitivity” to examine AI’s application. In fields that place extreme importance on process, responsibility, and authenticity—such as academic research, news reporting, and legal documents—we must draw strict red lines and regulations for AI use, because in these contexts, how something is done and who does it are as important as the result. However, in other areas that prioritize efficiency, function, and the final experience—such as commercial design and creative assistance—we should adopt a more open, results-oriented judgment, appreciating excellent work regardless of whether AI played a role in its creation.
Based on this, our path forward becomes clear. We need to establish industry standards and clear labeling for the degree of AI involvement, as transparent as a food ingredients list. We need to incorporate AI literacy into our education system with the same rigor as we teach academic citations. We need to reform the incentive mechanisms of current platforms, shifting the rewards from “more and faster” to “deeper and more refined.” And for every one of us, whether creator or critic, we need to engage in a profound self-dialogue: Am I objecting to poor quality, or am I objecting simply because it was “effortlessly achieved”?
Ultimately, we will discover that AI is not here to “replace” us, but to “migrate” the center of gravity of our value. It liberates us from the labor of complex execution, compelling us to invest our energy in higher dimensions beyond the machine’s reach: the wisdom to define problems, a unique aesthetic sense, the vision to construct worldviews, the ability to integrate across disciplines, and most importantly, the courage and ethical judgment to bear final responsibility.
We are standing at the entrance of a new era. Resisting the tide is futile; instead of building a dam in vain, we should learn how to guide the current. When excellent work generated by AI is no longer seen as a threat to human value but as a new tool extending from human intellect, only then will we have truly mastered this transformation. At that point, the phrase “AI did a great job” will no longer be a source of anxiety, but the highest praise for us humans—as the masters of thought, the arbiters of value, and the ultimate creators.