Foundational problem with AI

I know, I'm a bit late to the whole hype, but it seems there is a fundamental problem with AI: for any question there exist infinitely many correct answers. Even absurdly simple questions like '2+2=' have infinitely many correct solutions/answers (1+3, 8-4, 2*2, 9-5…). The same goes for coding-type prompts: just add an `if(0);` to your correct answer. There is no single right answer. Sure, you can filter out the simplest examples (like the above), but at the end of the day you're still left with a finite context for the answer/reply, yet people expect to eventually end up with something infinite (AGI)? (The magic step where AGI flips the all_knowing_predicting switch to true and recompiles itself.)
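To make the point concrete, here is a toy sketch (my own illustration, not anyone's actual training setup) of two syntactically different "answers" to the same prompt that behave identically, so any output-based check accepts both:

```python
# Toy illustration: many distinct programs are all "correct" answers
# to the same prompt, so exact-match grading can't single one out.

def answer_plain():
    return 2 + 2

def answer_padded():
    if 0:
        pass  # dead branch: changes the source text, not the behaviour
    return 9 - 5  # different expression, same value

# An output-based check accepts both, even though the sources differ:
print(answer_plain() == answer_padded())
```

And you can keep padding indefinitely (more dead branches, more equivalent expressions), which is the "infinitely many correct answers" problem in miniature.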
It seems people have watched too many videos where an AI of IQ x tweaks a parameter and ends up at x+1, and then on up to infinity (humans are limited by DNA, this thing can do it in a microsecond…). Do we have any such believers on the forum? How exactly can such a switch happen? Who can assign an IQ number to a machine smarter than the smartest human? Machines will write machine IQ tests? Seems like a recipe for garbage in, garbage out. It's like kabbalah thinking: with the correct parameters and pronunciations you're going to get god's real name, just tweak the ones and zeros.

Well, I belong firmly in the sceptic camp, but I have spent a fair amount of time and money trying to get a better idea of the current landscape. So, not just the LLM hype, but where RNNs are today, and how hybrids (visual + textual) might perform. Long story short: with a great deal of initial effort (data massaging), you can get machines to do a LOT of the boring work when it comes, for instance, to classification and extraction of data from documents. In some industrial settings (for instance, blueprints/schematics) this is quite advanced and extends from the design to the simulation phases of product design (or safety evaluation and the like).

On the other hand, all the hype around fancy code completion is GIGO, as you suggest. I think there is a use case, but I can't find it in all my pasta factory code :slight_smile:


Garbage in, garbage out. Meaning any AI can only be as good/useful as the data it's been fed, and I see a big, big problem right there.

Apart from that, yes, it’s a hype.

I still remember the Virtual Reality hype of the 90s. In the end all it meant was better graphics.

What pisses me off about ChatGPT etc. is that people have no clue how wasteful it is. Some are using it as a search engine. As if search engines themselves weren’t wasteful enough.

And equally clueless politicians declare that “we will do more to support AI”.

The EU is just about to get to grips with Alphabet & Meta & Co., and they’re eager to roll in the next pile of garbage that’ll have to be sorted out in a decade or so :person_facepalming:


Yeah, spot on. I did a bunch of test training on rental as well as personal (4090) GPUs. It wastes enormous amounts of energy for very little return. As I said earlier, I do know of a few examples in industry where it makes sense: a 100K collection of schematics sitting in drawers (automobile context), and the aid of machine learning makes sense again. But even that is OCR++ :slight_smile:
