Analyzing AGI: What would the world of artificial general intelligence look like?

My humble contention: A big reason opinion polls show considerable concern about recent AI advances is that our culture has produced few visions of a positive future with supersmart computers. So we all wonder: What will our society be like in a world where computers can do much of what we currently do (weak artificial general intelligence), can do all of what we currently do (strong AGI), and, finally, can do what we can’t even imagine doing (artificial superintelligence)? Whether each of those performance levels is achievable is another question — and reaching them would eventually require big advances in robotics, as well as AI. (Some would consider strong, human-level AGI to be superintelligence.)

Continue reading here


Superintelligence: OpenAI Says We Have 10 Years to Prepare

OpenAI was born in 2015, eight years ago, under a bold premise: we can—and will—build artificial general intelligence (AGI). Just as bold has been the determination of the startup’s founders, who have spent those eight years trying to architect the road toward that goal. They may have taken an off-ramp, but maybe not. No one knows.

It’s funny: not only do AI people love making forecasts (me included), but the field as a whole is unable to assess in hindsight whether those forecasts materialized. Is the transformer an AGI milestone? Is GPT the breakthrough we were waiting for? Is deep learning the ultimate AI paradigm? We should know by now, but we don’t. The lack of consensus among experts is telling.

Continue reading here
