Economics professor Alex Tabarrok of George Mason University reported that Anthropic's AI model Claude passed his law and economics exam. The pass was marginal: the examiners noted the answers showed a low level of knowledge, but still gave the model a positive mark for its ability to argue its point of view. In this respect Claude resembles ChatGPT, although it falls short of it on a number of other measures.
We are seeing explosive growth in the development and use of AI in applied research, yet the industry as a whole is still marking time, undecided about which path of development to choose. Claude was built to stricter requirements because it rests on a "constitutional" training scheme that constrains how the model handles the data it learns from. Put simply, most AI does not operate with the concepts of "good", "bad", and "forbidden", but simply draws on the entire set of available data.
AIs trained under "safe schemes" avoid talking about controversial topics, which makes them suitable for wide application but useless for solving complex and contentious problems. Claude, by contrast, is ready to formulate an opinion on almost any topic, yet it can readily push back if it considers a request foolish or provocative. Claude is more like an artificial personality, whereas ChatGPT is a typical intellectual tool.
Source: Tech Cult
