
The use of information technology to undermine Western democratic systems has long been a concern in the internet age. Totalitarian governments such as the CCP have historically employed a range of tactics to manipulate information and advance their ideologies: external propaganda campaigns that exploit vulnerabilities within democratic systems, internal propaganda centered on national rejuvenation, and combined internal-external campaigns aimed at intimidating foreign opposition while stoking hostility toward the outside world.

Though the CCP has demonstrated adeptness at harnessing science and technology to achieve its ambitions, the current state of AI still depends heavily on human development. Some narrow AIs, such as AlphaGo, can autonomously learn and improve their problem-solving within a specific domain, raising concerns about their seemingly uncontrolled power; artificial general intelligence (AGI), however, has yet to reach that stage. At this stage, therefore, I believe the core issue lies in the ideologies and crisis awareness of the developers themselves. It is worth noting that the responses generated by ChatGPT can differ significantly between its general consumer version and its developer-facing version.

The capability of an AI system largely hinges on the quality and relevance of its training data. Applying data filtering and weighting before feeding data into AI engines refines what the algorithms learn from. This addresses concerns about manipulation, makes training more effective, and improves the overall performance and reliability of AI models by reducing the impact of skewed or unrepresentative data. These preprocessing steps also help mitigate biases present in the training data, supporting fairer and more equitable decision-making by AI systems.
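As a rough illustration of the filtering-and-weighting idea described above, the sketch below drops low-quality records (too short, or exact duplicates beyond a cap) and then biases batch sampling toward more reliable sources. The thresholds, the source labels, and the reliability scores are all hypothetical stand-ins, not any particular company's pipeline; real systems would derive such weights from audits or held-out evaluations.

```python
import random
from collections import Counter

# Hypothetical reliability scores per data source (illustrative only).
SOURCE_WEIGHT = {"curated": 1.0, "web": 0.4, "unverified": 0.1}

def filter_and_weight(samples, min_length=20, max_dup=2):
    """Drop very short or over-duplicated records, then attach a
    source-reliability weight to each surviving record.

    `samples` is a list of (text, source) pairs.
    Returns a list of (text, weight) pairs.
    """
    seen = Counter()
    kept = []
    for text, source in samples:
        if len(text) < min_length:   # filter: too short to be informative
            continue
        seen[text] += 1
        if seen[text] > max_dup:     # filter: cap exact duplicates
            continue
        # weight: unknown sources default to the lowest trust level
        kept.append((text, SOURCE_WEIGHT.get(source, 0.1)))
    return kept

def weighted_batch(kept, k, seed=0):
    """Draw a training batch in which higher-weight records are
    proportionally more likely to be selected."""
    rng = random.Random(seed)
    texts = [t for t, _ in kept]
    weights = [w for _, w in kept]
    return rng.choices(texts, weights=weights, k=k)
```

Separating the filtering step (hard drops) from the weighting step (soft preferences) lets auditors inspect exactly which records were excluded and why, which is the kind of transparency the open-model audits mentioned below depend on.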

Mistral AI, a French company specializing in open large language models, adopts this approach to handling big data. The company aims to reduce computational and infrastructure costs while maintaining performance on par with other big-data-oriented competitors. As stated on its webpage, its open models are intended to serve as safeguards against the misuse of generative AI, enabling public institutions and private companies to audit generative systems, identify flaws, and detect improper use of generative models.

Western governments have grown increasingly cognizant of the CCP's ambitions to surpass them politically, economically, and militarily. Consequently, a primary objective when formulating regulations to govern AI development is to minimize the harm that could stem from the CCP's exploitation of AI to subvert democratic systems.

While the US has taken a significant step in the tech battle by cutting off China's access to advanced AI chips, more action is clearly needed. Governments face the challenge of creating a robust, supportive environment that balances AI development against concerns such as copyright infringement. Striking this balance is crucial if technological advancement is to serve as a defense against totalitarian regimes. Though easier said than done, government intervention is imperative to establish the necessary conditions for progress. Eliminating every vulnerability is not feasible, but measures to curb the CCP's antagonistic actions are set to reach an unprecedented level of strictness, far surpassing anything seen during the era of Sino-Western friendship.

▶️ Chow Sung-ming (鄒崇銘): "The AI revolution will only lead humanity to a 'dystopia'"
https://www.youtube.com/watch?v=7RLCcGctplQ
