Driven by curiosity, human beings have embarked on a long journey to understand the wonders of the universe. Science is our systematic enterprise: it builds and organizes knowledge in the form of testable explanations and predictions about the universe. The fine structure of matter and its reactions is the province of chemistry; the carriers of DNA and users of oxygen are covered by biology; and the laws of physics govern the dynamics of them all. Together, these branches of knowledge unite to decipher a beautifully mysterious universe more than thirteen billion years old.

In the last five centuries, physical science, from Galileo and Newton to Einstein and Higgs, has explored almost all scales of the universe. Our knowledge of its contents divides into three parts: the matter we know, about 4%; dark matter, about 25%; and the rest, dark energy. In the 17th century, Newton described the macroscopic world through classical mechanics and formulated the law of gravity. The classical picture of mechanics, however, failed to explain the structure of matter and its dynamics at short distances, and this led to the era of quantum physics at the beginning of the 20th century. On the other hand, in 1915 Einstein formulated his theory of gravitation, equating the dynamics of spacetime geometry with the energy of matter. The fact that quantum mechanics and general relativity have been found fundamentally incompatible stands as the greatest failure of twentieth-century science, and provides the greatest challenge at the dawn of the twenty-first century.
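Einstein's field equation makes this identification explicit. In its standard textbook form, quoted here only to illustrate the sentence above, the left-hand side describes the geometry of spacetime and the right-hand side the energy and momentum of matter:

$$G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu},$$

where $G_{\mu\nu}$ is the Einstein curvature tensor, $\Lambda$ the cosmological constant, and $T_{\mu\nu}$ the stress-energy tensor of matter.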

Theories of this kind are all motivated by the idea of finding one mathematical framework from which all the laws of nature can be derived. Since the 1970s, theories such as string theory have sought both quantized gravity and grand unification within one framework, as a theory of everything. String theory mutated from an unsuccessful 26-dimensional theory of hadron physics into a prospective ten-dimensional unified theory of all interactions. Reiner Hedrich said: “String theory is no theory at all, but rather a labyrinth structure of mathematical procedures and intuitions… It has no clear and unambiguous nomological basis; no physically motivated fundamental principle is known.” Leonard Susskind said: “…with absolutely no experimental basis, string theorists constructed a monumental mathematical edifice.” One way out is to take a step backward and redefine the concepts of time, space, matter, energy and vacuum, not from mathematics but from observation.

We have come to a very critical stage in the natural triangle of humans, knowledge and nature. The greatest “crisis” facing science today, its “stagnation”, is driven by the vectorisation of mathematical frameworks and the romanticizing of pure theory. Mathematics, the “universal language”, functions as the framework of science, yet it inevitably suffers its own fate of the “epistemological obstacle”. An epistemological leap is needed: the “laws of physics” may not be so “lawful”, and perhaps need to be rewritten. It may be helpful to examine the scientific method itself, and to head back to the earlier approach of natural philosophy in facing the yet unknown, or perhaps even the unknowable.






Chinese modernity has always been distinct from its Western counterparts in the unique nature of the collective: today, technological forms such as search engines, e-commerce platforms, and chat apps assume different roles than the same technologies do in the West.

The depoliticized discourse of technology in California becomes much more ideologically charged in China, land of human flesh searches, Taobao villages, and a vast, interlocking web of humans whose internal articulations and struggles to attain selfhood are often mediated by WeChat.

We explore the significance of Foucault’s notion of biopolitics and how it could help us understand what China’s new technologically mediated society means, and how it fits into the history of Chinese political and artistic thought: China's online world is one of a population always outrunning whatever algorithms enclose them.

Questions about copyright law, data privacy, and online security are really political questions about the border between the public and the private, and the nature of ownership: the folk practices of China's internet show an ongoing resistance to the emergence of politically autistic digital landlords.

The Chinese internet is merely a representation of China's urbanizing geography, featuring the same dialectic of a population encountering law in a specifically defined territory: but how can we use biopolitics to understand a population defined not in terms of shared life, but of shared information?

Chinese “culture,” whatever that might mean, serves as an algorithm, a sorter of the data points that human beings have become as they travel across the national web of circulation, which is the greatest of all Chinese technologies.

A phone with a handful of apps on it is all we need not only to negotiate and navigate the Chinese city but even to find our own place in it; not only to locate where we are on the map or where the nearest coffee shop is, but who the others are, what kind of relationships we have with them, how we can buy and sell.

China’s 1.4 billion people are the data, the contradictory history of New China the algorithm; when we crack the code, we’ll have reached a utopia of total circulation, where all data can move freely along the web, whether that’s the spatialized web of the city, the halfway abstracted one of the economy, or the pure flow of information itself. What would an autonomous mass look like?










An Open Letter


Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so it has been focused on the problems surrounding the construction of intelligent agents: systems that perceive and act in some environment.

In this context, “intelligence” is related to statistical and economic notions of rationality: colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields.
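As a minimal sketch, not part of the letter itself, of the decision-theoretic notion of rationality invoked here, an agent can be modeled as choosing the action with the highest expected utility; the actions, probabilities, and utilities below are invented for illustration:

```python
# A minimal sketch of decision-theoretic rationality: pick the action
# whose expected utility is highest. All numbers here are hypothetical.

def expected_utility(outcomes):
    """Expected utility of one action: sum of probability * utility."""
    return sum(p * u for p, u in outcomes)

# Each action maps to a list of (probability, utility) pairs.
actions = {
    "act_a": [(0.8, 10.0), (0.2, -5.0)],   # likely small gain
    "act_b": [(0.5, 30.0), (0.5, -20.0)],  # risky bet
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # -> act_a 7.0
```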

The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.

As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research.

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable.

Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls. The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI.

Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose.

We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.

The attached research priorities document gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself.

In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.













Unlike VR and AR, the concept of MR (Mixed Reality) is more perplexing, because it is not clear what “reality” is being mixed, and how the V (virtual) and the A (augmented) are to be mixed is left untold. What I propose is a more inclusive concept, namely expanded reality (ER), which refers to a self-contained virtual world produced by merging networked VR with the IoT (Internet of Things). In this world, humans or post-humans are immersed in an artificial environment and control manufacturing facilities through teleoperation while remaining immersed. Such a scenario is comparable to the one presented in The Matrix. In fact, I have already built a Human-Machine Interfacing Lab equipped with a miniature ER model. In what I call the Teleportation experience in the lab, the lines of transition between the virtual and the real are already erased at the experiential level.

In a way, all my effort in designing and constructing such a lab points to one concern: how to prevent this new type of technology from being controlled and manipulated by a handful of people as an efficient tool for impeding human freedom and dignity. Two factors are important here. First, we need a more precise and acute analysis of the structural changes that scientific and technological advancement brings to our socio-political life, and we should draw on technology itself to prevent opaque technological manipulation. The development of technology should always be carried out in an open social space, so that technological power cannot be monopolised by a small number of political or commercial manipulators. Second, we need to imbue technological achievements with humanistic rationality. Philosophers, artists and social scientists should be actively involved throughout technological innovation; based on their attentive observation of the process, they should voice their most pressing concerns about moral, aesthetic and social issues to those who work in science, technology, industry and, especially, political affairs. We should establish a humanistic ethical firewall against technological abuse.
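As a purely hypothetical sketch of the kind of teleoperation loop such an ER setup implies (the lab's actual stack is not described here; the broker address, topic name, and pose source below are all invented for illustration), one might stream an operator's headset pose to a remote IoT actuator like this:

```python
# Hypothetical teleoperation loop: stream a (stubbed) VR headset pose to a
# remote IoT actuator over MQTT. Broker, topic, and pose source are invented.
import json
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt


def read_headset_pose():
    """Stub for a VR tracking API; a real lab would query the headset SDK."""
    return {"x": 0.0, "y": 1.6, "z": 0.0, "yaw": 90.0}


client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also takes a CallbackAPIVersion
client.connect("broker.example.local", 1883)  # hypothetical broker

try:
    while True:
        pose = read_headset_pose()
        # The remote manufacturing cell subscribes to this topic and maps
        # the operator's pose onto actuator commands.
        client.publish("er/teleop/arm1/pose", json.dumps(pose))
        time.sleep(0.05)  # ~20 Hz command rate
except KeyboardInterrupt:
    client.disconnect()
```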

Prof. Philip Zhai

Head of the Human-Machine Interfacing Lab





When biology first emerged as a field of study, it was in a convoluted state, at the intersection of theology and natural history. In the 20th century, it entered its golden age. Two factors played significant roles in the establishment of modern biology as a science: one was the ideological transformation from holism to reductionism, and the other was the adoption of the experiment as standard practice. From this point on, cell biology, biochemistry, genetics, and embryology all began to develop rapidly, each facilitating the others. In 1953, the double-helix structure of DNA was discovered. Scientists then tackled a series of problems related to the “central dogma”, making the “DNA→RNA→protein” process a classic that entered the textbooks. Thenceforth, the paradigm shift of the biological sciences was complete, and the era of molecular biology officially began.
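The “DNA→RNA→protein” flow can be sketched schematically in code; this toy example is not from the text above, and the sequence and truncated codon table are invented for illustration:

```python
# Toy sketch of the central dogma: transcribe DNA to mRNA, then translate
# codons to amino acids. The sequence and tiny codon table are examples only.

CODON_TABLE = {  # a small excerpt of the standard genetic code
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP",
}

def transcribe(dna):
    """DNA -> mRNA: complement each template base and use U instead of T."""
    pairs = {"A": "U", "T": "A", "C": "G", "G": "C"}
    return "".join(pairs[base] for base in dna)

def translate(mrna):
    """mRNA -> protein: read triplet codons until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE.get(mrna[i:i + 3], "???")
        if amino == "STOP":
            break
        protein.append(amino)
    return "-".join(protein)

dna = "TACAAACCGATT"       # hypothetical template strand
mrna = transcribe(dna)      # -> "AUGUUUGGCUAA"
print(translate(mrna))      # -> "Met-Phe-Gly"
```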

Once the new paradigm was in place, scientists generally stopped questioning the existing theoretical models. Some turned from theoretical to applied science, translating basic research into technical applications, which led to the growth of subfields such as genetic engineering and cell engineering. After technologies such as PCR and GFP removed one technical bottleneck after another, genetic engineering entered a period of explosive growth.

Around the turn of the millennium, a series of phenomenal events carried the discussion of gene technology out of the scientific realm and into popular view, even grabbing the world’s attention. The announcement of Dolly the sheep in 1997 amazed the world, but also prompted people to ponder how technology affects living organisms. Another groundbreaking effort, the Human Genome Project, which began in 1990 and was completed in 2003, was dedicated to identifying and mapping all the DNA on the 23 pairs of human chromosomes, so that we might know ourselves on the physical level. Although it laid the foundation for many fields, including gene therapy, it also made the public wary about genetic privacy. Gene-editing technology, CRISPR/Cas for instance, is just such a double-edged sword: on the one hand, it could improve precision in clinical medicine and promote human well-being; on the other, it allows humans to customize the genetic makeup of their own bodies, as well as those of their descendants, coming close to playing God.

With the extraordinary advancement of biotechnology (especially genetic engineering), the importance of bioethics as a counterweight becomes paramount. On the molecular level, the main bioethical concerns are: 1. health and wellness, and the prediction, diagnosis and treatment of disease; 2. genetic privacy; 3. the application of genetic modification. Many problems, however, do not arise from the technology itself but are interwoven with complex social factors such as politics and religion. Thus, for those of us living in the “post-genome era”, the task is not to stop but to guide the overwhelming advancement of technology. This is not an issue for the fields of biology and technology alone, but a task that all humanity must solve together.