For the CNC quote, is the prediction done using data from CAM tools, the idea being to predict faster without actually creating the toolpath? It is a very interesting approach, but what is the confidence level of the prediction (>0.95)?
Yes, this is the intention. The confidence level depends on the data sets available and their relevance. We have had results that the specialists considered sufficiently good to make a reasonable proposal (offer).
How do you develop confidence in AI predictions and avoid hallucinations?
Hallucination is an issue with LLMs. Nearly all methods we use (POD, FFT, CVT, MLP, ...) are not affected by this issue. We have developed procedures that allow us to sweep the learning database and evaluate the validity of the prediction with quantifiable indicators.
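As an illustration of this kind of sweep (a generic leave-one-out sketch, not the actual procedure in the tool; the surrogate model, data, and error indicator below are all placeholders):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor  # placeholder surrogate model

def leave_one_out_sweep(X, Y, make_model):
    """Sweep the learning database: hold out each sample, retrain on the
    rest, and record a quantifiable error indicator for the prediction."""
    errors = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        model = make_model().fit(X[mask], Y[mask])
        pred = model.predict(X[i:i + 1])
        # Relative L2 error serves as the validity indicator for this sample.
        errors.append(np.linalg.norm(pred - Y[i]) / np.linalg.norm(Y[i]))
    return np.array(errors)

# Hypothetical database: 30 samples, 4 input parameters, 100-point responses.
rng = np.random.default_rng(0)
X = rng.uniform(size=(30, 4))
Y = np.sin(X @ rng.uniform(size=(4, 100)))
errs = leave_one_out_sweep(X, Y, lambda: KNeighborsRegressor(n_neighbors=3))
print(f"mean relative error: {errs.mean():.3f}, worst: {errs.max():.3f}")
```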
Have you integrated any CNC toolpathing software with AI so that the AI recommends a cutting process to produce the machined file (3D CAD model)?
No. Today we can predict parameter settings but not the process itself.
How do we handle inconsistencies in our datasets? Obviously we cannot associate one instance to represent a hole, so how do we isolate and identify the results and remove outliers?
If the size of the database is small, outliers need to be removed because of their relative influence. If the database is rich, outliers will fade away. We can also, of course, clean the database beforehand (automatically) so that outliers are removed. This resembles filtering and needs to be done with care.
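One common automatic cleaning scheme (a generic sketch, not necessarily what the tool does internally) is inter-quartile-range filtering on each response channel:

```python
import numpy as np

def iqr_filter(Y, k=1.5):
    """Flag samples with any response outside [Q1 - k*IQR, Q3 + k*IQR].
    Returns a boolean mask of samples to keep."""
    q1, q3 = np.percentile(Y, [25, 75], axis=0)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return np.all((Y >= low) & (Y <= high), axis=1)

# Hypothetical database with one corrupted run.
Y = np.random.default_rng(1).normal(size=(20, 5))
Y[7] += 25.0              # inject an outlier
print(iqr_filter(Y))      # row 7 is flagged for removal
```

As the answer notes, this is filtering and must be applied with care: k controls aggressiveness, and on a small database even a mild filter can discard legitimate physics, so what gets removed should be inspected.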
Is it possible to handle decision-making algorithms that can take in lifetime inputs and adjustments?
I assume you mean real-time updating of the input/output database: then yes, this is currently possible.
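Purely as an illustration of the idea (using scikit-learn's partial_fit as a stand-in for the actual update mechanism, with placeholder data):

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(2)
model = SGDRegressor()

# Train on the initial database.
X0, y0 = rng.uniform(size=(50, 3)), rng.uniform(size=50)
model.partial_fit(X0, y0)

# As new measurements arrive over the product's life, fold them in
# incrementally instead of retraining from scratch.
for _ in range(100):
    x_new, y_new = rng.uniform(size=(1, 3)), rng.uniform(size=1)
    model.partial_fit(x_new, y_new)
```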
What is the difference between the edge detection of Odyssee and the PCA algorithm (Principal Component Analysis)?
These two are very different. The edge detection employs techniques (CNN, max-pooling, etc.) to extract boundaries. PCA is a data decomposition technique (projection) used to reduce the dimensionality of the data.
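The distinction in code, as a generic sketch (a classical Sobel filter stands in for the CNN-based pipeline mentioned above): edge detection is a local filtering operation on a single image, while PCA is a global projection over a set of samples:

```python
import numpy as np
from scipy.ndimage import sobel           # edge detection: local convolution
from sklearn.decomposition import PCA     # projection: dimensionality reduction

img = np.random.default_rng(3).uniform(size=(64, 64))

# Edge detection extracts boundaries via convolution-style filters.
edges = np.hypot(sobel(img, axis=0), sobel(img, axis=1))

# PCA projects a *set* of samples onto a few principal directions.
X = np.random.default_rng(4).uniform(size=(100, 64 * 64))  # 100 flattened images
Z = PCA(n_components=10).fit_transform(X)  # 4096 -> 10 coordinates per sample
print(edges.shape, Z.shape)                # (64, 64) (100, 10)
```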
How much data (images, models, etc.) is required to train the models? How do you validate the models to know they are predicting accurately? What kind of accuracy do the models have: 2%, 5%, or 10% error?
We consider that the minimum number is (n+1), with n being the number of dimensions (parameters). One can go up to 2n, 3n, or even n^2 if needed. For example, with n = 20 parameters, at least 21 samples are required, while 2n = 40 or even n^2 = 400 give increasing robustness.
How much data do we need for training the algorithm for stress-strain or indentation property prediction?
As above: the minimum is (n+1), with n being the number of dimensions (parameters); 2n, 3n, or even n^2 can be used if needed.
For a prediction model using images, is there a minimum resolution needed for good predictability?
No, because we predict the same resolution as the original pictures, and no more. There are other techniques that allow enhancement of the predicted image (via interpolation).
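For the enhancement mentioned at the end, a minimal sketch using cubic interpolation (scipy.ndimage.zoom as a generic example; the actual technique may differ):

```python
import numpy as np
from scipy.ndimage import zoom

pred = np.random.default_rng(5).uniform(size=(32, 32))  # predicted low-res image
enhanced = zoom(pred, 4, order=3)          # 4x upscaling, cubic interpolation
print(pred.shape, '->', enhanced.shape)    # (32, 32) -> (128, 128)
```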
In general, what size of training data is recommended?
As above: the minimum is (n+1) samples, with n being the number of parameters; 2n, 3n, or even n^2 can be used if needed.
If the size/position of the design in an image varies within a same-size bounding box, what is the impact on predictability?
This can be identified and adjusted during the learning process.
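One generic way such variation can be identified and adjusted (an illustrative sketch, not necessarily the tool's internal method) is to recenter each design on its intensity centroid before learning:

```python
import numpy as np
from scipy.ndimage import center_of_mass, shift

def recenter(img):
    """Shift the design so its intensity centroid sits at the image center,
    removing position variation within the bounding box."""
    cy, cx = center_of_mass(img)
    h, w = img.shape
    return shift(img, ((h - 1) / 2 - cy, (w - 1) / 2 - cx),
                 order=1, mode='constant')

# Hypothetical off-center design: a bright square in one corner.
img = np.zeros((64, 64))
img[5:15, 40:50] = 1.0
centered = recenter(img)
print(center_of_mass(centered))   # ~ (31.5, 31.5): near the image center
```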