From 600+ conversations with the world’s leading thinkers.
When we create digital faces, they have to look and behave organically; they have to trigger that part of your brain that starts to think about what that 'person' would be like.
Behind every free‑will decision there must be comprehension and intention—and that's where consciousness comes in: the capacity to understand the meaning of symbols. In science, 'information' refers only to the probability of symbols occurring, not to their meaning. Thus science's definition of information discards meaning from reality, but for conscious beings, meaning—not the symbol itself—is what truly matters.
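The claim that science's 'information' tracks only symbol probabilities, never meaning, is Shannon's definition of information, and it can be made concrete in a few lines. The sketch below (function name and example strings are mine, for illustration only) computes the entropy of a message from its symbol frequencies alone: scrambling a sentence destroys its meaning but leaves its entropy exactly unchanged, because the symbol statistics are identical.

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Average information per symbol, in bits, computed from
    symbol frequencies alone -- meaning plays no role."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

meaningful = "the cat sat"
scrambled = "".join(sorted(meaningful))  # same symbols, meaning destroyed

# Identical symbol statistics give identical information content:
assert abs(shannon_entropy(meaningful) - shannon_entropy(scrambled)) < 1e-9
```

This is exactly the point of the quotation: by this measure a sentence and its anagram carry the same 'information', so whatever meaning is, it is not captured by the definition.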
The brain is definitely not doing computation in the purest sense. We are not crunching numbers in binary ones and zeros in our heads. A more important question is: what are your inputs, what output do you want, and how intelligently can the system get from one to the other?
My fear has never been the machines waking up and deciding to do away with us, but rather that we, in our own boneheaded way, deploy systems inappropriately, or without thinking through the unintended consequences that may occur.
The cardinal rule in academic research is to base your assertions on citable evidence rather than conjecture. This principle sets Perplexity apart from ChatGPT, which has the freedom to generate content without such constraints. Perplexity, by design, is restricted to sourcing information directly from the web, eschewing any reliance on pre-existing knowledge within the model.
AI has been an area of technology for many decades, but the advances of the past five years show us why this is one of the major technology events of the last several centuries.
While people have been worried about AI being embedded in humanoid robots from the science fiction world, our lives have been shaped and influenced by AI, which makes tens of billions of decisions each day about what we see and how we communicate.
I remain a technical optimist… the problem is not artificial intelligence, it's natural stupidity.
Historically, we've viewed the human mind as the paramount problem solver. Yet, is it still our ally, or has it become our adversary? I believe we're at a juncture where the human mind is shifting towards the latter.
Historically, humans have evolved to be wary of the unfamiliar—a survival instinct that's served us well. Thus, the age-old "Frankenstein" narrative, wherein we birth powerful entities beyond our understanding or control, resonates deeply with our intrinsic apprehensions.
It's the ultimate invention—the last one we'll ever need to make—because once we have AI that is generally intelligent and then superintelligent, it will do the inventing far better than we can. In that sense, it's a handing over of the baton.
The ultimate goal is to be in a state of flow with machines. Think about people working with horses, or herding cattle with a dog: these are examples of interacting with other intelligent creatures in a way that is fluid and allows us to achieve something we couldn't do ourselves.