One man's look at generative artificial intelligence

This article by Dan Polansky looks at what is called generative artificial intelligence (GenAI) and large language models (LLMs). Examples include ChatGPT, Gemini, Copilot, and LLaMA.

There are benefits, there are risks and there are costs.

An immediately obvious risk is inaccuracy. GenAI can easily generate inaccurate or untrue statements. This can be mitigated by user awareness. After all, users need to learn a critical attitude toward the sources they read anyway; GenAI is far from the only offender as a source of untrue statements.

A benefit is the use of GenAI as a source of ideas to be independently examined or verified. One use of this is initial statement verification or probing: one can e.g. ask 'Is the following accurate: "Adjectives are never capitalized in English."' and have the statement corrected. However, the above-mentioned risk really needs emphasis: it seems all too easy and tempting to trust the answer without independent verification.
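
As a concrete illustration, here is a minimal sketch of such probing using the OpenAI Python client; the model name is an illustrative assumption, and the answer returned still calls for independent checking.

```python
# A minimal sketch of statement probing via the OpenAI Python client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

statement = 'Adjectives are never capitalized in English.'
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice, not a recommendation
    messages=[
        {"role": "user",
         "content": f'Is the following accurate: "{statement}"'},
    ],
)
print(response.choices[0].message.content)
# The answer still needs independent verification, e.g. against a grammar
# reference; proper adjectives such as "English" or "French" are in fact
# capitalized.
```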

One may wonder whether GenAI can be used as a form of psychotherapy. A remarkable feature is the limitless patience shown in answering questions, even stupid or annoying questions. One can practice asking questions, improving formulations of questions, thinking critically about the answers, etc.

GenAI can be charged with contributing to global climate change via its electricity use. The ethics of this aspect is for each prospective user to consider; governments have not prohibited GenAI for this reason and seem unlikely to do so, given that they have, for the most part, not even prohibited cryptocurrency/cryptoasset mining. A serious analysis of this aspect would include a quantitative comparison with other dispensable uses of energy, such as video streaming.

GenAI can also draw, paint, or otherwise create images based on a verbal description. For this use, the label large language model seems misleading or inaccurate, on the face of it.

Interestingly, GenAI seems rather inept at even trivial calculation, as shown in a video by Edmund Weitz.

Tools providing facilities complementary to GenAI include Wolfram Alpha and the Desmos calculator. It would be interesting to see what would happen if one could somehow integrate GenAI with e.g. Wolfram Alpha, that is, have GenAI delegate computational assignments to Wolfram Alpha (or an equivalent); a sketch of such delegation follows.
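
The following is a minimal sketch of such delegation under stated assumptions: the tool-calling shape follows the OpenAI chat API, while the Wolfram Alpha "short answers" endpoint and the placeholder app id are assumptions to be checked against the current Wolfram Alpha documentation.

```python
# A sketch of delegating computation from GenAI to an external engine
# via tool calling.
import json
import requests
from openai import OpenAI

client = OpenAI()

def query_wolfram_alpha(query: str) -> str:
    """Send a query to Wolfram Alpha's short-answers endpoint (assumed)."""
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": "YOUR_APP_ID", "i": query},  # placeholder app id
    )
    return resp.text

tools = [{
    "type": "function",
    "function": {
        "name": "query_wolfram_alpha",
        "description": "Evaluate a mathematical or factual query exactly.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice
    messages=[{"role": "user", "content": "What is 123456789 * 987654321?"}],
    tools=tools,
)

# If the model chose to delegate, run the tool and report its exact result;
# otherwise fall back to the model's own (possibly inaccurate) answer.
message = response.choices[0].message
if message.tool_calls:
    args = json.loads(message.tool_calls[0].function.arguments)
    print(query_wolfram_alpha(args["query"]))
else:
    print(message.content)
```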

One can ask whether the label generative artificial intelligence is appropriate. That is, one can ask whether this really is an intelligence, one that is artificial and generative. Very superficially, something suggestive of human verbal intelligence is there. Moreover, given the term artificial general intelligence (AGI), we may use the term artificial intelligence much more broadly to include specialized problem/task solving, and then chess playing would be artificial intelligence. Generative artificial intelligence may even approach passing the Turing test. Paradoxically, the responses from GenAI are too fast to be human, which betrays their artificial origin. Be that as it may, GenAI does not really seem to understand what it is saying; but then, as a sinister note, too many humans speak as if they did not understand what they are saying either. And then, one may wonder whether part of the human brain implements something like GenAI (such an idea is found e.g. here).

As for the mechanism of operation, sources seem to indicate that textual GenAI just tries to determine the next word given the preceding sequence of words (using artificial neural networks). I struggle to find this plausible and to understand how that principle could possibly produce the kind of behavior that we see, but what do I know. I would find it much more plausible if somewhere in the guts of textual GenAI there were something like the OpenCyc ontology. A toy sketch of the next-word principle follows.
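
To make the next-word principle concrete, here is a toy sketch, assuming a bigram model over a tiny made-up corpus; real LLMs condition on long contexts with neural networks and operate on subword tokens rather than whole words.

```python
# A toy illustration of the next-word principle: a bigram model that,
# given the last word, samples the next one from counted frequencies.
import random
from collections import defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat saw the dog .").split()

# Count, for each word, which words follow it in the corpus.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(seed: str, length: int = 8) -> str:
    """Repeatedly sample a plausible next word, starting from a seed word."""
    words = [seed]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```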
