End of the line for coding: LLMs will do a better job. How do I pay my mortgage?


Daniel Payne

I don't think I will be writing React front-ends in four years' time. The LLMs will do a faster and better job than I can. So how do I pay my mortgage?

I am pivoting to flow programming; the robots will need orchestrating and the on-premises LLM agents will need feeding with data. The skills I have in designing, problem-solving and debugging transfer easily.

To be honest it's like going back in time for me. I started as a systems analyst back in the 1980s: I worked out how to solve problems using the new-fangled databases and handed the design over for someone else to code. However, I ended up coding SQL and then UI, as I could do it quicker than delegating to others. I am now going back to delegating, but this time it's to machines, not people.

This is what I have done to prepare for the transition; it might help you if you are an old-school coder wondering what to do.

I have been using flow programming for many years now; I used to use SQL Server Development Studio for my ETL (Extract, Transform, Load) tasks.

But ever since I ditched everything Microsoft, including SQL Server, I have been using Node-RED as my go-to tool for managing data imports into PostgreSQL & SQLite.

Why did I ditch Microsoft? Vendor lock-in. FoxPro, VB, Silverlight: I have had the rug pulled from under my feet too many times. Open source is the only way to go.

It also turns out that Node-RED is really good at building a complex UI, the sort of UI that needs to be based around real-time data and multiple inputs from different sources. I do a lot of these dealer dashboards used in big banks; they are really difficult and cost millions of pounds to make. As a learning exercise I created a basic trading platform over a weekend. I have created this trading app many times over the years to test new tech as it comes along, but this is the first time I have got it close to using real-time data; I normally fake it.

This is what the UI looks like. It's not very pretty, but that's me being quick, not a limitation of Node-RED's dashboard (it uses Vue / Vuetify); I have built some well-designed, pixel-perfect websites using the Vuetify library.

The important thing to note is that the graph at the bottom responds to incoming real-time prices, the user clicking around on the buttons, and some complex maths that calculates option prices to draw on the screen.

I have built this with React and it is a bit of a nightmare coordinating everything, but it can be done. When I built it with Node-RED, however, I started to see how flow programming can help with the upcoming increases in complexity.

Interconnected functions

Node-RED is proper full-stack engineering, so the first thing I need to do is get all my maths functions ready. The genius of flow programming is that I can see how they are interconnected, and which functions rely on what. I can see straight away that my calculateEuropeanOptionPrice needs a discountFactor, etc.
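To make that dependency concrete, here is a rough sketch of what those two functions could look like in plain JavaScript: standard Black-Scholes pricing for a European call, which needs the discount factor e^(-rT). It is illustrative only, not my actual node code.

```javascript
// Illustrative only: the dependency the flow makes visible.
function discountFactor(rate, timeToExpiry) {
  return Math.exp(-rate * timeToExpiry);
}

// Standard normal CDF via the Abramowitz & Stegun approximation.
function normCdf(x) {
  const t = 1 / (1 + 0.2316419 * Math.abs(x));
  const d = 0.3989423 * Math.exp(-x * x / 2);
  const p = d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  return x > 0 ? 1 - p : p;
}

// Black-Scholes price of a European call: the strike leg needs discountFactor.
function calculateEuropeanOptionPrice(spot, strike, rate, vol, timeToExpiry) {
  const d1 = (Math.log(spot / strike) + (rate + (vol * vol) / 2) * timeToExpiry) /
             (vol * Math.sqrt(timeToExpiry));
  const d2 = d1 - vol * Math.sqrt(timeToExpiry);
  return spot * normCdf(d1) - strike * discountFactor(rate, timeToExpiry) * normCdf(d2);
}
```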

Compare this to the view I have when I am building the same thing with React.

The green dots in the flow are interesting: they show the value of a constant or the result of the unit tests for a function.

This is not part of Node-RED out of the box; it's just how I decided to build my nodes. Each node builds a function and tests it in the same place, then pops the result into the node's status message.

All the functions get rebuilt and tested on each deploy, and when something fails, I see where the failure is.
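As a rough sketch of that pattern (not my exact nodes, and the function here is just an example), a Node-RED function node can define the function, run a tiny self-test on deploy, and report the result through node.status():

```javascript
// Build the function, test it, and show the result as the node's status dot.
const discountFactor = (rate, t) => Math.exp(-rate * t);

// Tiny self-test, run every time the flow is deployed and this node is triggered.
const passed = Math.abs(discountFactor(0.05, 1) - Math.exp(-0.05)) < 1e-12;

node.status({
  fill: passed ? "green" : "red",
  shape: "dot",
  text: passed ? discountFactor(0.05, 1).toFixed(4) : "test failed"
});

// Share the function on the flow context so downstream nodes can use it.
flow.set("discountFactor", discountFactor);
return msg;
```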

Data Stage

Once all the function-creation paths have completed, I need to load pricing data. Node-RED has connectors to loads of different data sources, but for this project I load about 50 MB of text files into memory as a cache. The state of each user's game is then loaded from backup files and put into a shared context. It is so easy to back up and restore this context and keep everything going between restarts.
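A minimal sketch of that caching step, assuming a file-in node feeds a function node one price file at a time (the names are illustrative, not my actual flow):

```javascript
// Cache each price file in global context so every flow can read it later.
const prices = global.get("prices") || {};
prices[msg.filename] = msg.payload;        // raw text of the file just read
global.set("prices", prices);

// Restored game state goes into flow context, where it survives between messages
// and can be written back out to a backup file on a timer.
if (msg.topic === "restore") {
  flow.set("gameState", JSON.parse(msg.payload));
}
return msg;
```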

Then the last stage is to calculate each player's position (how much money they have) and move them on to the next day. This just cycles round and recalculates every second.

This is really different to how we build UI in code with React. It's like server & client components on steroids.

The red nodes run on the server and the blue nodes run in the client. They send messages between each other using WebSockets, which is lightning fast. Incoming messages are on the left and outgoing on the right.

If we focus in on the Prices Chart you will see it needs:

  • Indicative Estimate from a Current Price
  • Current Position
  • Historic Prices

as well as the outputs from user choices:

  • trade settings
  • Graph Range
  • Graph Scale (show market price or profit/loss at that price)

All these messages are sent at different times, but the Prices Chart is able to coordinate all the incoming messages and draw an image. The coordination bit did take a few lines of code, but that’s really the only place much coding was done for the UI.
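The shape of that coordination code is roughly this, sketched as a function node with invented topic names: remember the latest message per input and only emit a combined payload for the chart once everything has arrived at least once.

```javascript
// One message arrives at a time, each tagged with the input it came from.
const needed = ["indicative", "position", "history", "tradeSettings", "range", "scale"];
const latest = context.get("latest") || {};

latest[msg.topic] = msg.payload;           // keep the newest value for this input
context.set("latest", latest);

// Only draw once every input has been seen; otherwise stay quiet.
if (needed.every(topic => topic in latest)) {
  msg.payload = { ...latest };             // combined data for the chart to render
  return msg;
}
return null;
```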

Why are these nodes and messages such a big thing?

It's preparation for AI.

In a lot of up-and-coming systems these nodes are going to be calls out to:

  • LLMs
  • MCP servers (Model Context Protocol), an API for doing things
  • LangChain wrappers. LangChain will really fry your brain; it did mine.

For Node-RED, there is a cloud service called FlowFuse. This project has all your scaling needs covered, from thousands of different input streams to managing authentication & authorization at scale.

If you would prefer something different, n8n is a newer flow-programming cloud experience.

But my preference for flow programming with AI is Flowise AI.

For this evaluation, I thought I would download the FBI's declassified and public files on the Epstein case and ask it to extract the people and the events that interconnect them.

About 7 years ago I worked for a law firm, when this sort of thing was a multi-person, multi-year project. I did this in a couple of hours.

The first thing is to get the data into the correct state, which is not easy as this is typical real-world data. The download is a number of PDF files that contain mostly scanned images and a bit of text. A lot of the text is completely irrelevant to the questions I will ask.

This is exactly the type of system we will be asked to build in the future. I know Palantir has this use-case covered, but there are going to be a large number of smaller organisations that will need the same sort of introspection into their private files.

I took all these documents, processed them, and asked this question in a chat-bot. I blurred out the names as I don't want to get sued; plus I live in England, where 13,200 people were arrested in 2024 for saying the wrong thing.

make a list of all the named people who have a relationship with Epstein, and present these results in a table, with name and relationship

This sort of inquiry translates into the business world. As an example, company A wants to take over company B. There are 200,000 pages of contracts that company B has entered into, all scanned in by their legal department. Is there anything in there that might stop the takeover? This is a massive job for people, but AI can do this sort of thing quite easily.

I used Flowise to generate the chat-bot, which is then used to answer the question.

It only has six nodes; however, there are a few steps that I needed to go through to get the data ready for this flow.

A lot of this can be done inside Flowise or Node-RED.

However, as I have images and I am familiar with the process using Linux, this was done as a number of system calls out to specialist packages running on my Linux box. This is old-school character recognition with tools that have been around for years; all of it can now be done with LLMs instead.
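Roughly, that old-school pipeline looks like this when driven from Node (the file names are illustrative, and pdftoppm and tesseract are assumed to be installed):

```javascript
// Render PDF pages to images, then OCR each image to plain text.
const { execSync } = require("child_process");
const fs = require("fs");

// 1. Render each page of a (made-up) PDF to a 300 dpi PNG: page-1.png, page-2.png, ...
execSync('pdftoppm -r 300 -png "epstein-file-01.pdf" page');

// 2. OCR every rendered page into a matching .txt file.
for (const png of fs.readdirSync(".").filter(f => /^page-\d+\.png$/.test(f))) {
  const base = png.replace(/\.png$/, "");
  execSync(`tesseract "${png}" "${base}" -l eng`);   // writes page-N.txt
}
```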

I used Node-RED to process my documents to get to the text stage, then let Flowise's built-in document store do the last three steps.

Once we have a pile of text characters, we need to do three things in Flowise to prep the text for AI (a rough sketch follows the list).

  • Chunking, break into little bits for processing & retrieval
  • Convert to vectors; a vector is a long list of numbers that represents the meaning of a word or a sentence.
  • Store in a Vector DB. This has a special index that can compare multi-dimensional numbers.
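Here is the rough sketch promised above, done by hand against a local Ollama server rather than through Flowise's UI, so you can see there is no magic in the three steps. The chunk sizes, the embedding model name and the in-memory "vector DB" are all illustrative.

```javascript
// 1. Chunking: break the text into overlapping slices.
const chunkText = (text, size = 1000, overlap = 200) => {
  const chunks = [];
  for (let i = 0; i < text.length; i += size - overlap) chunks.push(text.slice(i, i + size));
  return chunks;
};

// 2. Convert to vector: ask a local Ollama embedding model for the numbers.
async function embed(text) {
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  return (await res.json()).embedding;     // a long array of floats
}

// 3. Store in a "vector DB": here just an array searched by cosine similarity.
const store = [];
const cosine = (a, b) => {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
};

async function index(docText) {
  for (const chunk of chunkText(docText)) store.push({ chunk, vector: await embed(chunk) });
}

async function retrieve(question, k = 4) {
  const q = await embed(question);
  return store
    .map(e => ({ chunk: e.chunk, score: cosine(q, e.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(e => e.chunk);
}
```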

The LLM is just a program that takes a piece of text and returns another piece of text. This is what it looks like running on my machine in a terminal window.

I like to run everything locally; I use Ollama to run all my LLMs. I have found doing things locally has given me a deeper insight into models, and it's easy to play around to see how they handle specific jobs.
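"Text in, text out" really is all it is. As an illustration (the model name is just whichever one you have pulled), a single call to the local Ollama server from Node looks like this:

```javascript
// Send a piece of text to a locally running Ollama model and print the reply.
async function askLocalModel(prompt) {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    body: JSON.stringify({ model: "llama3", prompt, stream: false }),
  });
  return (await res.json()).response;      // the returned piece of text
}

askLocalModel("Explain what a discount factor is in one sentence.").then(console.log);
```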

For reference, my dev box runs 2 secondhand Xeon chips with 40 cores between them, 128 GB of RAM and a couple of secondhand Nvidia 4000 cards, all from eBay. I can run all this locally; it's not super fast, but OK for learning. About £600/$800. You can do all of this with cloud services for about £100/$150.

We will use Flowise to create a piece of text to put into the Ollama LLM, but the text that finally gets processed won't just be this question:

  • make a list of all the named people who have a relationship with Epstein, and present these results in a table, with name and relationship

It will also have the following bits of text added:

  • # Persona : you are a legal assistant
  • # sources: Use the retriever tool Epstine as your primary source of information
  • #88. Characters: 986 2 = & x* \rmer itiend claimed Epstein 2 g-ed out of a promise to reim- ‘Surse him hundreds of thousands of dollars after their failed investment in Texas oil wells, A judge decided Epstein owed him nothing.

These additional bits of text, which you don't see in the chat-bot prompt, are the things that make the interaction effective.
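Stitched together, the text that actually reaches the model looks roughly like this; the function is my paraphrase of the pattern, not Flowise's exact template.

```javascript
// Assemble persona, source instructions, retrieved chunks, chat history and
// the user's question into one big prompt string.
function buildPrompt(question, retrievedChunks, history) {
  return [
    "# Persona: you are a legal assistant",
    "# Sources: use the retriever tool Epstine as your primary source of information",
    "# Retrieved documents:",
    ...retrievedChunks,                    // the OCR'd text, gibberish and all
    "# Conversation so far:",
    ...history,                            // what the Buffer Window Memory contributes
    "# Question:",
    question,
  ].join("\n");
}
```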

If we look at the flow, the "Tool Agent" is the code running the UI and taking the key-presses.

It then passes that text to the “Retriever Tool” to do a search on the documents and add the results to the prompt. A query and result set might look something like this. The LLM decides on the query text.

The Tool Agent then adds the text from the "Chat Prompt Template" and any prior prompts and responses from the "Buffer Window Memory", and sends a much larger prompt into the LLM. This gives us a result that looks much more like a knowledgeable person has generated it. The Tool Agent then formats any output it receives nicely and draws it on screen.

Sorry, no consciousness: just a very complicated maths equation and a lot of text, most of which looks like gibberish.

If, like me, you know JavaScript, Node and React, this is what I have started to learn:

  • Node-RED & FlowFuse for a basic introduction to flow programming.
  • n8n if you prefer a more modern interface for flow programming.
  • Flowise to start building information management systems with LLMs.
  • LangChain for when you understand the concepts and want to start building high-performance systems.

These are probably the two most important videos that woke me up to the fact that traditional programming is over.

Rilley's video "Complete guide to AI" has a small section on vibe coding, where he built some React that was just as good as anything I could create, but in minutes instead of days.

Leon’s video “FlowiseAI Masterclass” is a good introduction to the nuts and bolts of putting this all together.

Between them it's almost 4 hours of video, but worth every minute.
