The study, published in the journal Philosophical Studies, examines the conditions necessary for consciousness to emerge and asks whether modern artificial intelligence systems could ever achieve this state.
Wiese distinguishes two approaches: one assesses the likelihood that existing AI systems will achieve consciousness, while the other identifies which types of systems are unlikely to achieve it. Wiese’s research focuses on the latter approach, aiming to minimize the risk of accidentally creating artificial consciousness and to prevent deception by AI systems that appear conscious but are not.
Central to Wiese’s argument is the “free energy” principle proposed by neuroscientist Karl Friston, which suggests that the processes that support the self-organization of a living organism may be similar to those required for consciousness.
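For readers curious about the formalism behind this idea, the free energy principle is usually stated in terms of variational free energy, a quantity that a self-organizing system is said to minimize. The following is a standard textbook formulation of that quantity, not an equation taken from Wiese’s paper: for a system with internal model q(s) over hidden states s and observations o,

F = E_q(s)[ ln q(s) − ln p(o, s) ] = KL[ q(s) ‖ p(s | o) ] − ln p(o)

Minimizing F both makes the system’s internal model a better approximation of the true causes of its observations (the KL term) and keeps its observations within expected bounds (the −ln p(o) term), which is why the principle is often linked to how living organisms maintain their own organization.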
One of the most important differences Wiese identifies lies in the causal structure of brains and computers. Unlike computers, which shuttle data between separate memory and processing units, the brain processes information through densely interconnected areas, and this interconnection may play a critical role in consciousness.
This difference, Wiese argues, makes the emergence of consciousness in conventional computers questionable.
Source: Ferra