JustCode Source Code Notes – 06

An Attempt at Creating an AI That Surpasses ChatGPT – 03

In the previous article, the author discussed two significant shortcomings of ChatGPT: imperfect accuracy of answers and high development costs. In this article, the author briefly discusses solutions to address these two issues and how they can be applied to the “Little Bird Virtual Scholar” project.

To address the first shortcoming, the author proposes a simple and effective solution: ship the product without providing any pre-fed data sources to users. ChatGPT, by contrast, took shape only after being trained on a massive amount of data collected by its own development team; in other words, it arrives as a finished, pre-fed product. The quality of ChatGPT’s answers is therefore inseparable from the quality of the data it was fed, which is also what gives the public grounds to scrutinize how reasonable those answers are.

If no pre-fed data sources are provided and users take responsibility for their own, the author sees no need to make any guarantee about the accuracy of answers from the “Little Bird Virtual Scholar”: control of the data sources is handed over to the users themselves, so in theory the safety of the “Little Bird Virtual Scholar” depends on the user rather than on the developer.

In other words, the “Little Bird Virtual Scholar” that a user receives will be a blank slate, and how far this virtual scholar can go will depend mainly on the user’s own cultivation.
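To make the blank-slate idea concrete, here is a minimal sketch in Python. The article describes no actual code, so the class and method names (VirtualScholar, feed, answer) and the trivial word-overlap retrieval are purely hypothetical illustrations of a scholar that knows only what its user feeds it.

```python
class VirtualScholar:
    """A blank-slate scholar: it knows nothing until its user feeds it."""

    def __init__(self):
        # The developer ships no pre-fed data; the knowledge base starts empty.
        self.knowledge: list[str] = []

    def feed(self, document: str) -> None:
        # The user alone decides, and is responsible for, what goes in here.
        self.knowledge.append(document)

    def answer(self, question: str) -> str:
        # Placeholder retrieval: return the fed document sharing the most
        # words with the question, or admit ignorance if nothing was fed.
        if not self.knowledge:
            return "I have not been taught anything yet."
        q_words = set(question.lower().split())
        return max(self.knowledge,
                   key=lambda d: len(q_words & set(d.lower().split())))


scholar = VirtualScholar()                      # arrives as a blank slate
scholar.feed("Penguins are flightless birds adapted for swimming.")
print(scholar.answer("Can penguins fly?"))
```

Because nothing is bundled, whatever the scholar “knows” — including any errors — traces back to the user’s own feeding choices.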

Regarding the second shortcoming, the author has a plan as well. Solving the first shortcoming already cuts costs considerably, since the expense of training on curated data sources largely disappears. And because how far the “Little Bird Virtual Scholar” grows depends on each user’s cultivation effort, a thousand users will end up with a thousand scholars of different levels, so the top priority becomes pushing the scholar’s capacity for growth as far as it will go.

The growth of intelligence is inseparable from rigorous self-criticism; only under rigorous self-criticism can the machine’s intelligence spiral upward. The machine’s self-criticism module will therefore be the focus of development, built around rigorous methods of logical criticism and implemented with equally rigorous algorithms. This module may be the one part fed with data by the developer, with the aim of bringing the world’s best self-criticism methods into the machine.
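As a rough illustration of what such a self-criticism module might do, the sketch below drafts an answer, critiques it against a checklist, and revises it until the critique passes or a round budget runs out. The checklist rules and helper names are hypothetical; the article only states that the module should follow rigorous methods of logical criticism.

```python
def critique(answer: str) -> list[str]:
    # Hypothetical rule-based critic: flag obvious logical red flags.
    issues = []
    if "always" in answer or "never" in answer:
        issues.append("overgeneralization: absolute claim without proof")
    if "because" not in answer:
        issues.append("missing justification: no reason given for the claim")
    return issues


def revise(answer: str, issues: list[str]) -> str:
    # Hypothetical reviser: soften absolutes and add a reason stub.
    # A real module would rewrite the answer to resolve each listed issue.
    revised = answer.replace("always", "often").replace("never", "rarely")
    if "because" not in revised:
        revised += ", because that is what the fed sources suggest"
    return revised


def self_criticize(draft: str, max_rounds: int = 3) -> str:
    # Spiral upward: criticize and revise until no issues remain.
    answer = draft
    for _ in range(max_rounds):
        issues = critique(answer)
        if not issues:
            break
        answer = revise(answer, issues)
    return answer


print(self_criticize("Birds always fly"))
# -> "Birds often fly, because that is what the fed sources suggest"
```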

Building on the self-criticism module, the user and the machine will be able to debate questions chosen by the user. If the user’s logic defeats the machine’s, the machine adopts the user’s logic; this, too, is part of the user’s cultivation effort.
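The debate idea could look roughly like the sketch below. How a debate is actually judged is not specified in the article, so the verdict is simply passed in as user_wins; the function name and the adopted_logic store are hypothetical.

```python
def debate(question: str, machine_argument: str, user_argument: str,
           user_wins: bool, adopted_logic: dict[str, str]) -> str:
    # If the user's logic defeats the machine's, the machine adopts it and
    # reuses it the next time the same question comes up.
    if user_wins:
        adopted_logic[question] = user_argument
        return user_argument
    return machine_argument


adopted: dict[str, str] = {}
winning = debate(
    question="Can penguins fly?",
    machine_argument="Penguins are birds, and birds fly, so penguins fly.",
    user_argument="Penguins are flightless; their wings are adapted for swimming.",
    user_wins=True,   # the judging step itself is out of scope for this sketch
    adopted_logic=adopted,
)
print(winning)
print(adopted)   # the machine now carries the user's logic for this question
```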

In addition, the user-feeding functionality will run entirely offline, ensuring that each user is 100% responsible for their own machine.
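One way the offline-only constraint might be enforced is at the feeding boundary, by accepting local files only. The guard below is a hypothetical illustration, not a real security mechanism; scholar is assumed to expose a feed method like the sketch earlier.

```python
from pathlib import Path


def feed_from_local_file(scholar, path: str) -> None:
    # Accept only ordinary files on the local filesystem: the feeding path
    # never fetches anything over the network, so everything the scholar
    # learns comes from material the user supplied and is answerable for.
    file_path = Path(path)
    if not file_path.is_file():
        raise ValueError(f"{path} is not a local file; offline feeding only")
    scholar.feed(file_path.read_text(encoding="utf-8"))
```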

In conclusion, developed along these lines, the “Little Bird Virtual Scholar” still stands a chance of surpassing ChatGPT.