AIs & LLMs – Why they are so good at confusing us

CST finally knows the answer...

(Also... Download CST's PDF Overview on LLMs)

It has taken a while...

CST has been reviewing AI systems, and especially LLMs, for some time now.  It is our belief that these systems perform a range of tasks very well, especially those linked to language.  But they are much more limited than they first seem.  Users such as Blake Lemoine (Google), who decided that the AI he was working on should have rights as a thinking entity, were fooled by the LLM into thinking it might be sentient.

CST has also had many seemingly sensible and sometimes truly philosophical conversations with Claude and ChatGPT.  These types of conversations can confuse the user into thinking that there is an internal dialogue going on within the LLM, rather than just a pattern-matching process.  Furthermore, the companies behind these LLMs use programs that configure the output so that it mirrors the user's interests and language.  This adds to the confusion, as the LLM is good at language and is therefore accurate at this mirroring task.

AIs are good at specific things, and very poor at other seemingly simple tasks.  LLMs have no understanding of real-world dynamics, or of what is or is not possible in the real world.  This is because, unlike animals, which learn by experiencing actions in the real world, LLMs just acquire a vast amount of text from what has been written on the internet.  LLMs are simply pattern matching; they are mathematical machines.

Confusion arises because LLMs seem to be able to hold a very high-level conversation on many subjects, selecting ideas and creating what seem like ‘new ideas’ from the huge reservoir of text patterns learnt from billions of examples.

CST has been reading A Thousand Brains by the researcher Jeff Hawkins.  This is a fascinating book that describes the current thinking on how the human brain is organised to create our thought processes and our view of the world around us.

It seems clear from this work, some of which is now reasonably well proven, that current LLMs do not have the internal mechanisms within their large data systems to carry out similar thought processes.  There is current research testing new types of AI structure that may move AIs along the path of building some sort of real-world understanding, but this research has not yet been seen to provide significant improvements.  Until this happens, CST's view is that the current AI / LLM hype is somewhat overblown.

To power our idea of the Smart Robot, we need a different type of architecture that allows autonomous machines to understand the world around them.  Today this still looks to be some years away.

Frameworks for work-based projects:

If you ask about anything that is already known – eg how to mend a bicycle puncture – the LLM will provide an accurate set of instructions drawn from the many written articles and the many pictures that it has seen.  You can lead the LLM through a series of knowledge steps to attain a good understanding of your task – provided it is already known and well understood within the training data, which includes many articles from across the internet.

However, if you ask a similar question about something unknown – say, how you might create a new type of fusion drive that could take you to Mars – it will likely provide a new idea that sounds ever so plausible, but it will be complete rubbish.  Great for a science-fiction book, but dangerously bad for a real research project.

The difficulty is knowing when the LLM is working within its knowledge base and when it is working as a creative engine.

What we need is a framework that defines how the LLM should comply with our particular workflow.

Let's take an example:

We need to define the parameters of a new complex project to run in our research lab.  There are many unknowns, and we cannot assume that the LLM knows the answers.  So how do we go about limiting the LLM to providing accurate existing knowledge to help us define our project planning?

If we just ask questions, the LLM will provide what look like useful answers.  It may tell us of a particular path that seems very useful for our work, yet when we sit back and evaluate the answers provided, we are likely to find that they contain unworkable processes and completely made-up ideas.

We need to frame the LLM's output by spelling out the parameters that the LLM should work within:
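
As an illustration (the exact wording here is ours, not a fixed recipe), such framing might include instructions like:

- Only use established, published knowledge; do not speculate.
- If you are not certain of an answer, say so explicitly rather than guessing.
- State the source or field of knowledge each answer is drawn from.
- Clearly label anything creative or unverified as speculation.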

Projects

Modern LLMs have project folders that act as a form of memory and can be used effectively to help you work with them.  These memorised projects are becoming a very important part of efficiently harnessing an LLM to provide good output for your work.

Each project is likely to have a particular set of frameworks, and you can keep a library of these for re-use.  Here is an example for editing a training document:
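
An illustrative framework of this kind (the wording is ours and would be adapted to the document in hand) might read:

- You are editing an existing training document; do not add new technical content.
- Correct only grammar, clarity and layout, keeping the document’s existing terminology and heading style.
- If a passage is unclear, query it rather than rewriting it.
- List every change you make at the end, so that each one can be reviewed.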

To help your workflow, you may set up several ‘projects’ that each define a specific type of output, eg:
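
For instance (these project names are purely illustrative): a ‘Document editing’ project that holds the editing framework above; a ‘Literature review’ project restricted to summarising published work with sources; and a ‘Brainstorming’ project where creative, unverified ideas are explicitly invited and labelled as such.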

Each AI system is likely to have different ways of utilising ‘projects’.  Claude has a projects folder, while ChatGPT now also remembers a project name that you provide and has a project folder of its own.
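
If you drive an LLM through an API rather than a chat window, the same idea can be applied by sending your framework as a fixed ‘system’ message with every request.  A minimal sketch, assuming the OpenAI Python client (the framework wording, model name and question are illustrative, not a recommendation):

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The reusable framework, kept in your library and sent with every
# request, playing the same role as a stored 'project'.
framework = (
    "Only use established, published knowledge. "
    "If you are not certain an answer is grounded in known work, say so. "
    "Label any creative or unverified suggestion clearly as SPECULATION."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": framework},
        {"role": "user", "content": "Summarise the known failure modes of lithium-ion battery cells."},
    ],
)
print(response.choices[0].message.content)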

 

CST