As we explore the intersection of AI and manufacturing, a key question arises: Has anyone been successful in using AI to generate G-code for CNC machining or 3D printing? If so, what were the specific applications, and how did the AI perform in terms of accuracy, efficiency, and reliability compared to traditional methods? Were there any significant improvements in production speed, cost-effectiveness, or design complexity that could be directly attributed to the use of AI? Additionally, how were challenges such as validation of AI-generated G-code, handling of complex geometries, and consideration of material properties addressed? Understanding the successes and failures of previous attempts to use AI in this context can provide valuable insights and guide future developments in this promising field.
There is a big push to eliminate g-code entirely. It is a relic of the past and hasn’t adapted well to modern manufacturing techniques.
Of the “public” projects I’m aware of, none generate g-code. They all take the model and generate the step pulses or servo positions directly; it is essentially direct from CAD to cutting.
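To make that concrete, here is a rough, hypothetical sketch (in Python, with made-up machine parameters such as STEPS_PER_MM and the segment_to_pulses helper) of what skipping g-code can look like: the controller turns a path segment from the CAD/CAM model directly into per-axis step counts and a pulse interval, with no text commands in between.

```python
import math

STEPS_PER_MM = 80.0  # hypothetical drive setting (microstepping, pulley, leadscrew all baked in)

def segment_to_pulses(x0, y0, x1, y1, feed_mm_s):
    """Turn one linear path segment into per-axis step counts and a step
    interval, going straight from coordinates to pulses with no g-code."""
    dx_steps = round((x1 - x0) * STEPS_PER_MM)
    dy_steps = round((y1 - y0) * STEPS_PER_MM)
    length_mm = math.hypot(x1 - x0, y1 - y0)
    move_time_s = length_mm / feed_mm_s if feed_mm_s > 0 else 0.0
    dominant_steps = max(abs(dx_steps), abs(dy_steps), 1)
    return dx_steps, dy_steps, move_time_s / dominant_steps

# A toolpath handed over from CAD/CAM as raw coordinates rather than g-code text
path = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0), (0.0, 0.0)]
for (x0, y0), (x1, y1) in zip(path, path[1:]):
    print(segment_to_pulses(x0, y0, x1, y1, feed_mm_s=20.0))
```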
I’ve used ChatGPT to generate some G-code. It does surprisingly well. It was able to make some basic shapes for me, and it even responded to prompts about optimizing the G-code for multiple paths and about G-code nuances for specific machines.
For transparency, I have a commercial account with OpenAI and use their APIs for application development purposes. I also use Tensor and a few other older frameworks that are fading by the day.
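For anyone who wants to reproduce that workflow, the pattern is just prompting the chat completions endpoint and treating the reply as candidate G-code to inspect, never to stream straight to a machine. A minimal sketch using the official openai Python client; the model name, system prompt, and part description are placeholders, not anything from the posts above.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Ask for a simple toolpath; the reply is untrusted text to review and simulate,
# never something to stream straight to a controller.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You write G-code for a 3-axis CNC router. Metric units, absolute positioning."},
        {"role": "user",
         "content": "Generate G-code to profile a 50 mm square, 3 mm deep, with a 6 mm end mill."},
    ],
)

candidate_gcode = response.choices[0].message.content
print(candidate_gcode)
```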
ChatGPT doesn’t really generate code so much as “cut and paste” it from examples it has stored. So it can take a shape, say a “5 point star”, and because it has a rough idea of what g-code is, it can give you a 5-point star in g-code.
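To show what that output typically amounts to, here is a small sketch that does the same job deterministically: compute the ten outline vertices of a 5-point star and emit straight G1 moves between them. All feeds, depths, and the safe height are placeholder values, and there is no cutter-radius compensation.

```python
import math

def star_gcode(cx=0.0, cy=0.0, r_outer=25.0, r_inner=10.0,
               z_cut=-1.0, z_safe=5.0, feed=300):
    """Emit G-code for a 5-point star outline as straight G1 moves.
    Placeholder feeds/depths; no cutter compensation, no ramping."""
    pts = []
    for i in range(10):
        r = r_outer if i % 2 == 0 else r_inner
        a = math.radians(90 + i * 36)  # start at the top point, step 36 degrees
        pts.append((cx + r * math.cos(a), cy + r * math.sin(a)))

    lines = ["G21 G90", f"G0 Z{z_safe:.3f}",
             f"G0 X{pts[0][0]:.3f} Y{pts[0][1]:.3f}",
             f"G1 Z{z_cut:.3f} F{feed}"]
    for x, y in pts[1:] + [pts[0]]:  # close the loop back to the first vertex
        lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed}")
    lines.append(f"G0 Z{z_safe:.3f}")
    return "\n".join(lines)

print(star_gcode())
```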
What it cannot do is adaptive clearing or pocketing. It can’t maintain constant tooth engagement along a continuous path through 3D space.
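For context, “tooth engagement” is usually quantified as the engagement angle, which for a straight cut follows directly from the radial width of cut and the tool diameter; adaptive-clearing strategies continuously reshape the toolpath to hold that angle roughly constant. A small illustrative sketch of the standard relationship (the tool size and stepovers below are arbitrary):

```python
import math

def engagement_angle_deg(radial_width_of_cut, tool_diameter):
    """Engagement angle for a straight cut: arccos(1 - 2*ae/D).
    ae = D/2 gives 90 degrees; full slotting (ae = D) gives 180 degrees."""
    ae = min(max(radial_width_of_cut, 0.0), tool_diameter)
    return math.degrees(math.acos(1 - 2 * ae / tool_diameter))

# A 10 mm end mill at a few radial stepovers
for ae in (1.0, 2.5, 5.0, 10.0):
    print(f"ae = {ae:4.1f} mm -> engagement = {engagement_angle_deg(ae, 10.0):6.1f} deg")
```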
ChatGPT is a “better Google” but it won’t be replacing anybody any time soon. It is a failure for two reasons:
- The model memory footprint costs in excess of $100,000,000 per year to keep running, and it has already reached its theoretical maximum for concurrent hardware integration. This is the one app that does not benefit from cloud architecture. It is true that each successive model is compartmentalized (i.e., the natural language model feeds the intent model, which feeds the chain of models that builds the response), but they are largely monolithic within each domain.
- Large models degrade over time because humans are either honestly wrong or malevolently deceptive. A case in point is the lawyer who asked ChatGPT to write his response to a court directive, and ChatGPT simply invented case law to prove its point. Where did it get this phony case law? OpenAI hasn’t commented yet.
I would agree that the current domain-specific models are beginning to bear fruit, for instance training a machine to use a specific end mill. This is what we refer to as “Edge Intelligence,” which, when combined with traditional rule and logic processes, is truly amazing.
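As a hypothetical illustration of that combination (the EndMillRules limits, the stand-in “learned” model, and every number below are invented for the example): a learned component can suggest a feed rate for a specific end mill while deterministic, rule-based limits keep the suggestion inside a safe envelope.

```python
from dataclasses import dataclass

@dataclass
class EndMillRules:
    """Hard, rule-based limits for one specific end mill (hypothetical values)."""
    max_feed_mm_min: float = 1200.0
    min_feed_mm_min: float = 100.0
    max_depth_of_cut_mm: float = 4.0

def learned_feed_suggestion(chip_load_mm, flutes, rpm):
    """Stand-in for a trained model's output; here it is just the textbook
    relationship feed = chip load * flutes * rpm."""
    return chip_load_mm * flutes * rpm

def edge_feed_rate(chip_load_mm, flutes, rpm, depth_of_cut_mm, rules=EndMillRules()):
    """Combine the 'learned' suggestion with deterministic rules:
    the rules always win, which is the point of pairing ML with logic."""
    if depth_of_cut_mm > rules.max_depth_of_cut_mm:
        raise ValueError("Depth of cut exceeds the limit for this tool")
    suggestion = learned_feed_suggestion(chip_load_mm, flutes, rpm)
    return min(max(suggestion, rules.min_feed_mm_min), rules.max_feed_mm_min)

print(edge_feed_rate(chip_load_mm=0.03, flutes=3, rpm=10000, depth_of_cut_mm=2.0))
```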
But I stop short of endorsing large-model systems. They are cool parlor tricks, but they are friggin dangerous and scary in the hands of folks who trust them.
I can sit with you on a screen share and show you that about 70% of the code (g-code, Java, Rust, C/C++, Lua) is complete garbage. If you ask it how to do a simple task, one with a couple-line answer, you get code cut and pasted from somebody else’s examples, and it’s cool (thus my statement that it is the “better Google”). Ask more complex problems, ones where values must be carried between operations, and things go sideways really quickly.
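That “values carried between operations” failure mode is also why AI-generated g-code needs mechanical validation before it goes near a spindle. Below is a minimal, hypothetical sanity checker (a real shop would run a full simulator or verifier); it only flags a few pieces of modal state, units, feed, and spindle, that must be established before a cutting move, plus a simple axis-envelope check.

```python
import re

def check_gcode(text, x_limit=300.0, y_limit=300.0):
    """Very small sanity check for generated g-code: modal state (units, feed,
    spindle) must be set before it is relied on, and moves must stay in bounds.
    Illustrative only, not a substitute for a real simulator."""
    errors = []
    units_set = feed_set = spindle_on = False
    for n, line in enumerate(text.splitlines(), start=1):
        line = line.split(";")[0].strip().upper()  # drop comments
        if not line:
            continue
        words = line.split()
        if "G20" in words or "G21" in words:
            units_set = True
        if "M3" in words or "M03" in words:
            spindle_on = True
        if re.search(r"\bF[\d.]+", line):
            feed_set = True
        if "G1" in words or "G01" in words:
            if not units_set:
                errors.append(f"line {n}: cutting move before units (G20/G21) set")
            if not feed_set:
                errors.append(f"line {n}: cutting move with no feed rate in effect")
            if not spindle_on:
                errors.append(f"line {n}: cutting move with spindle off")
        for axis, limit in (("X", x_limit), ("Y", y_limit)):
            m = re.search(rf"\b{axis}(-?[\d.]+)", line)
            if m and abs(float(m.group(1))) > limit:
                errors.append(f"line {n}: {axis} move {m.group(1)} outside +/-{limit} mm envelope")
    return errors

print(check_gcode("G1 X10 Y10 F200\nG21 G90\nM3 S10000\nG1 X500 Y0"))
```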
Lastly, there is a third failure of the system: