Artificial intelligence (AI) will never become a conscious being because it lacks intention, which is intrinsic to human beings and other biological creatures, according to Sandeep Nailwal, co-founder of Polygon and the open-source AI company Sentient.
“I don’t see that AI will have any significant level of conscience,” Nailwal told Cointelegraph in an interview, adding that he does not believe the doomsday scenario of AI becoming self-aware and taking over humanity is possible.
The executive was critical of the idea that consciousness emerges randomly through complex chemical interactions or processes, saying that while such processes can create complex cells, they cannot create consciousness.
Instead, Nailwal is concerned that centralized institutions will misuse artificial intelligence for surveillance and curtail individual freedoms, which is why AI must be transparent and democratized. Nailwal said:
“That’s my core idea for how I came up with the idea of Sentient, that eventually the global AI, which can actually create a borderless world, should be an AI that is controlled by every human being.”
The executive added that these centralized threats are why every person needs a personalized AI that works on their behalf and is loyal to them, to protect them from AIs deployed by powerful institutions.
Sentient’s open model approach to AI vs. the opaque approach of centralized platforms. Source: Sentient Whitepaper
Related: OpenAI’s GPT-4.5 ‘won’t crush benchmarks’ but might be a better friend
Decentralized AI can help prevent a disaster before it happens
In October 2024, AI company Anthropic released a paper outlining scenarios in which AI could sabotage humanity, along with potential solutions to the problem.
Ultimately, the paper concluded that AI is not an immediate threat to humanity but could become dangerous in the future as AI models grow more advanced.
Different types of potential AI sabotage scenarios outlined in the Anthropic paper. Source: Anthropic
David Holtzman, a former military intelligence professional and chief strategy officer of the Naoris decentralized security protocol, told Cointelegraph that AI poses a massive risk to privacy in the near term.
Like Nailwal, Holtzman argued that centralized institutions, including the state, could wield AI for surveillance, and that decentralization is a bulwark against AI threats.
Magazine: ChatGPT trigger-happy with nukes, SEGA’s 80s AI, TAO up 90%: AI Eye