ChatGPT Cheat Sheet

Our forum has some interesting ChatGPT posts. I found this pdf useful as a helicopter recon over examples of ChatGPT’s capabilities:

As a long-serving computer scientist I remain deeply skeptical about inflated claims for new AI products. Context matters: Big Tech hasn't had a "big bang" for a while, they're all hurting financially, and there's been no new thing that we all "must have". Hence Microsoft wants a slice of Google's pie, and vice versa.

Looking through the examples of what ChatGPT can do, it has a set of functions that can be combinatorially arranged and run, driven by computational-linguistics parsing of the user's natural-language input. Nothing it does is new; most of the output functionality exemplified in the pdf reminds me of compsci undergraduate programming assignments. What is a bit (compiler-ishly) novel, I suppose, is the hooking of its internal semantic structures, and their thematic breadth, to the user's natural-language inputs, so as to produce output that aligns with the user's expectations.
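To make that "parse the input, then compose canned functions" mental model concrete, here is a deliberately toy sketch. Every name in it (`summarize`, `respond`, the keyword table) is hypothetical and invented for illustration; real large language models generate text token by token rather than dispatching to hand-written functions:

```python
# Toy caricature of the "parse, then combinatorially arrange functions" view.
# All function and table names are hypothetical; this is NOT how ChatGPT works.

def summarize(text: str) -> str:
    """Crude 'summary': keep only the first sentence."""
    return text.split(".")[0] + "."

def shout(text: str) -> str:
    """Stand-in for some other canned output transformation."""
    return text.upper()

# "Computational-linguistics parsing" reduced to bare keyword spotting.
INTENTS = {"summarize": summarize, "shout": shout}

def respond(user_input: str, payload: str) -> str:
    """Run every function whose trigger word appears in the request, in order."""
    steps = [fn for key, fn in INTENTS.items() if key in user_input.lower()]
    for fn in steps:  # the combinatorial arrangement of internal functions
        payload = fn(payload)
    return payload

print(respond("Please summarize this and shout it",
              "LLMs are hyped. They are also useful."))
# → LLMS ARE HYPED.
```

The design point of the caricature: the hard part isn't any single transformation (each is undergraduate-level), it's mapping open-ended natural language onto the right composition of them, which is where the compiler-ish novelty lives.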

Regulatory bodies in some jurisdictions have asked tech corporations to tone down their claims about the AI capabilities they’ve recently grafted into their products.

Maybe right or wrong or other, interesting times, all input welcome,
cheerio,
mitch


Having worked on "AI" since way before it was trendy, the current hype cycle worries me as well. Specifically, as this cycle fails to deliver on the hype or on stock-market expectations in an ecologically, or even fiscally, sustainable manner, the trough of disappointment will inevitably follow.

However, beneath the hype there have been real achievements with the potential to make a stunning impact on lives, especially for those facing challenges: the Tangata Whaikaha, the socially deprived in every society, and those in developing countries.

The risk is that these achievements, and the human potential they support, will be lost as the Anglo-American capitalist hype train becomes outraged or simply moves on in search of short-term profits for the few.

So yes, mathematics is self-referential, and there are no absolute truths within a formal system, as illustrated by Gödel's incompleteness theorems and the independence of the Continuum Hypothesis. Classical logic is deductive, so analysis is inevitably recursive. Most AI models predict the future on the basis of misunderstandings of the past. These challenges are all real and potentially fatal for a given model.
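The point that models extrapolate from past patterns can be made concrete with a minimal bigram predictor, a deliberate caricature rather than any production model. The corpus and function names here are invented for illustration:

```python
# Minimal sketch of "predicting the future from counts of the past":
# a bigram model that picks the most frequent follower of the last word seen.
from collections import Counter, defaultdict

def train(corpus: str):
    """Count, for each word, which words have followed it."""
    follows = defaultdict(Counter)
    words = corpus.lower().split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict(follows, word: str) -> str:
    """Return the historically most common next word, or '?' if never seen."""
    if word not in follows:
        return "?"  # the past is silent: no basis for prediction at all
    return follows[word].most_common(1)[0][0]

model = train("the hype grows and the hype grows and the hype fades "
              "then the trough follows")
print(predict(model, "hype"))  # → grows  (seen twice vs "fades" once)
print(predict(model, "agi"))   # → ?      (unseen word, nothing to extrapolate)
```

If the past it learned from is unrepresentative, or misunderstood, the model's "future" inherits exactly that flaw, which is the failure mode described above.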

We just need to wind back the hype and understand the strengths of AI as well as its weaknesses, so that we can support the most vulnerable and deprived in our society. Throwing out the baby with the bathwater is not useful.
