On-Demand Webinar Series:
Navigating the AI/ML Landscape in Modern Engineering
Exploring the Fusion of AI/ML and Modern Engineering
1. Understanding AI/ML: Predictive vs Generative Models
Q&A
What allowed you to generate a surrogate (for the crash simulation use case) using so few training simulations (9)? Was that a physics-informed surrogate?
The first trick is the correct choice of the sampling points: they should be representative of changes in the physical behaviour. The second is to introduce some form of physics-based content. This can be done with PINNs (or MINNs), SVD or FFT.
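As an illustration, here is a minimal NumPy sketch of the SVD/POD idea (not ODYSSEE's internal implementation): the dominant modes extracted from a handful of training snapshots encode the main physical deformation patterns, and new responses are then predicted as combinations of those modes. The snapshot matrix and its dimensions are hypothetical.

import numpy as np

# Hypothetical snapshot matrix: each column is one training simulation
# (e.g. a flattened displacement field), here 9 simulations of 5000 DOFs.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((5000, 9))

# Proper Orthogonal Decomposition via thin SVD: the columns of U are the
# spatial modes, s their energy content.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)

# Keep the modes carrying, say, 99% of the energy.
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1
modes = U[:, :r]

# A new response is predicted in this low-dimensional basis: interpolate
# the r modal coefficients over the design variables, then reconstruct the
# full field as modes @ coefficients.
coeffs = modes.T @ snapshots            # (r, 9) coefficients to interpolate
print(r, coeffs.shape)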
What is the real-time prediction performance of ODYSSEE when it is combined with Cradle CFD?
Firstly, we should note that, contrary to Cradle, ODYSSEE provides a surrogate model and not the resolution of the PDE itself. Apart from this limitation, ODYSSEE can be used for CFD problems just as it is for structural mechanics problems. The same procedures hold for any PDE (mechanics, thermal, fluid, electromagnetism, etc.).
Have you done any ML work with CFD results?
Yes, ODYSSEE can be used for CFD problems just as it is for structural mechanics problems. The same procedures hold for any PDE (mechanics, thermal, fluid, electromagnetism, etc.).
What is the source for Hexagon's LLM for CAE? Was it generated by Hexagon, or did you have support from users of Hexagon products?
The sources are multiple, but we do not develop an LLM solution.
Is ODYSSEE part of MSCOne?
Yes.
Are there any Multi Body Dynamics and NVH application examples?
Yes, we have many of them. I'll try to present one or two during the next sessions.
What should the approach be to implement ML in CAE case studies?
1. Start by building a model (in any form).
2. Use sampling (Design of Experiments) to create many versions of it representing the variations caused by the model variables.
3. Import the output of the DOE into ODYSSEE (see the sketch after this list).
4. Follow the trainings available online, or contact us for training on the ODYSSEE CAE module.
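As a rough illustration of the workflow above, here is a minimal Python sketch using scipy and scikit-learn rather than the ODYSSEE CAE module itself; run_solver, the variable bounds and the sample count are hypothetical placeholders.

import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor

def run_solver(x):
    # Hypothetical placeholder for one CAE simulation returning a scalar output.
    return float(np.sum(np.sin(x)))

# 1. Two model variables, e.g. thickness [1, 3] mm and yield stress [200, 400] MPa.
bounds_lo, bounds_hi = np.array([1.0, 200.0]), np.array([3.0, 400.0])

# 2. Space-filling DOE (Latin Hypercube) with 2n+1 = 5 samples.
sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(n=5), bounds_lo, bounds_hi)

# 3. Run the simulations and collect the outputs (the table imported into the ML tool).
y = np.array([run_solver(x) for x in X])

# 4. Train a surrogate and predict a new, untested design.
surrogate = GaussianProcessRegressor().fit(X, y)
print(surrogate.predict(np.array([[2.0, 300.0]])))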
Does ODYSSEE offer some sort of "one-shot" capability, or does the network have to be retrained completely for each new problem or problem class (considering the replacement of classical FEM simulations)?
The training is done only once for a given set of variables. Any study is allowed within the boundaries of those variables. Any modification of the variable (parameter) set requires a new training.
My understanding is that the software will have a predictive behaviour, but its prediction is based on other simulations. Wouldn't it be prudent for the predictive software to use real-life video for training?
These are two different engineering topics or methods: learning from models and learning from physical experimentation. Indeed, ODYSSEE can do both: it can learn from computer output (digitalized) or from sensor-based images or sound (not yet video). Both are useful and equally necessary for a realistic prediction; they are complementary.
Where does the data come from to build models? How much data is needed? In other words, how many real-life tests / how much real-life data do you need to tokenize CAE models vs. mechanical behaviour?
It often comes from CAE (CAD, mesh, model, etc.) and other specifications. The models may be FEM, FVM, etc., and other data come from expertise, product definition, quality issues, and so on. In general, the minimum number of data sets (simulations) needed is (n+1), n being the number of model variables; for example, a model with 10 variables needs at least 11 simulations. We often propose 2n+1, up to n^2, to start with, and the model can then be further refined if needed using the adaptive sampling solutions in ODYSSEE.
Do you have any examples of manufacturing process optimization, for example in sheet forming processes?
Yes. Please contact kambiz.kayvantash@hexagon.com directly.
Have you developed a reliable link between CFD software and ODYSSEE?
Yes. Please contact kambiz.kayvantash@hexagon.com directly.
I read that the statistics of LLM models are based on a group of tokens and not on individual tokens. Is what I read correct?
Your interpretation is correct. An LLM uses all available tokens of a query to generate the most likely next token; it then repeats with all tokens (now grown by one), predicts the next, and continues this process until the last token.
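A minimal sketch of that autoregressive loop; next_token_distribution stands in for a real LLM's forward pass and is purely hypothetical.

def next_token_distribution(tokens):
    # Hypothetical stand-in for an LLM forward pass: returns (token, probability)
    # pairs conditioned on *all* tokens seen so far.
    vocab = ["the", "model", "predicts", "tokens", "."]
    return [(t, (i + 1) / 15.0) for i, t in enumerate(vocab)]

def generate(prompt_tokens, max_new_tokens=5):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        dist = next_token_distribution(tokens)    # uses the whole context
        token = max(dist, key=lambda p: p[1])[0]  # greedy: most likely next token
        tokens.append(token)                      # the context grows by one token
    return tokens

print(generate(["explain", "LLMs"]))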
Q&A
For the CNC quote, the prediction is done using data from CAM tools; the idea is to predict faster without actually creating the toolpath? It is a very interesting approach, but what is the confidence level of the prediction (>0.95)?
Yes, this is the intention. The confidence level depends on the data sets available and their relevance. We have had results which the specialists considered sufficiently good to make a reasonable proposal (offer).
How do you develop confidence in AI predictions and avoid hallucinations?
Hallucination is an issue with LLMs. Nearly all the methods we use (POD, FFT, CVT, MLP, ...) are not affected by this issue. We have developed procedures which allow us to sweep the learning database and evaluate the validity of the prediction with quantifiable indicators.
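One common way to implement such a sweep is a leave-one-out check: each training sample is held out in turn, the model is retrained, and the held-out response is predicted. Here is a minimal scikit-learn sketch on synthetic data (these are not the ODYSSEE indicators themselves).

import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(20, 3))                 # 20 training designs, 3 variables
y = np.sin(X).sum(axis=1)                     # toy response standing in for a simulation

errors = []
for train_idx, test_idx in LeaveOneOut().split(X):
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    errors.append(abs(pred[0] - y[test_idx][0]))

# A quantifiable confidence indicator: the leave-one-out error distribution.
print(f"mean LOO error: {np.mean(errors):.3f}, max: {np.max(errors):.3f}")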
Have you integrated any CNC toolpathing software with AI so that the AI recommends a cutting process to produce the machined file (3D CAD model)?
No. Today we can predict parameter settings but not the process itself.
How do we handle inconsistencies in our datasets? Obviously, we can't associate one instance to represent a hole, so how do we isolate and identify the results / remove outliers?
If the size of the database is small, outliers need to be removed because of their relative influence. If the database is rich, outliers will fade away. We can also clean the database beforehand, of course (automatically), so that outliers are removed. This resembles filtering and needs to be done with care.
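As an illustration of automatic cleaning, here is a minimal sketch of a robust (median/MAD-based) outlier filter; it is a generic example, not ODYSSEE's own filtering, and as noted it must be applied with care.

import numpy as np

def remove_outliers(y, threshold=3.5):
    # Flag samples far from the median in robust (MAD-based) z-score terms.
    # Returns a boolean mask of the samples to keep.
    dev = np.abs(y - np.median(y))
    mad = np.median(dev)
    robust_z = 0.6745 * dev / mad
    return robust_z < threshold

y = np.array([1.02, 0.98, 1.05, 0.99, 9.70, 1.01])   # one obvious outlier
print(y[remove_outliers(y)])                          # the 9.70 entry is dropped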
Is it possible to handle decision-making algorithms that can take in lifetime inputs and adjustments?
I assume you mean real-time updating of the input/output (database): then yes, this is currently possible.
What is the difference between the edge detection in ODYSSEE and the PCA algorithm (Principal Component Analysis)?
These two are very different. Edge detection employs techniques (CNN, max-pooling, etc.) to extract boundaries. PCA is a data decomposition technique (projection) which is used to reduce the dimensionality of the data.
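A minimal NumPy/SciPy sketch contrasting the two ideas on synthetic data: a gradient (Sobel-style) convolution extracts edges from an image, while PCA projects data onto its directions of largest variance.

import numpy as np
from scipy.ndimage import convolve

# Edge detection: convolve an image with a gradient (Sobel) kernel.
image = np.zeros((8, 8)); image[:, 4:] = 1.0        # vertical edge in the middle
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
edges = convolve(image, sobel_x)                     # large values along the edge

# PCA: project centred data onto the directions of largest variance.
rng = np.random.default_rng(0)
data = rng.standard_normal((100, 10))                # 100 samples, 10 features
centred = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
reduced = centred @ Vt[:2].T                         # keep the first 2 components
print(edges.max(), reduced.shape)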
How much data/images/models, etc. are required to train the models? How do you validate the models to know they are predicting accurately? What kind of accuracy do the models have ... 2% or 5% or 10% error?
We consider that the minimum number is (n+1) with n being the number of dimensions (parameters). One can go up to 2n, 3n, or even n^2 if needed.
How much data do we need to use for training the algorithm for stress-strain or indentation property prediction?
We consider that the minimum number is (n+1) with n being the number of dimensions (parameters). One can go up to 2n, 3n, or even n^2 if needed.
For a prediction model using images, is there a minimum resolution needed for good predictability?
No, because we predict at the same resolution as the original pictures, and no more. There are other techniques which allow the enhancement of the predicted image (via interpolation).
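A minimal sketch of that kind of interpolation-based enhancement using SciPy's zoom (a generic example, not ODYSSEE's own enhancement): the resolution increases, but no new information is created.

import numpy as np
from scipy.ndimage import zoom

# A predicted field/image at the original (coarse) training resolution.
predicted = np.random.default_rng(0).random((64, 64))

# Interpolation-based enhancement to 4x the resolution: a smoother rendering
# of the same prediction.
enhanced = zoom(predicted, 4, order=3)    # cubic interpolation
print(predicted.shape, enhanced.shape)    # (64, 64) -> (256, 256)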
In general what is the size of training data recommended?
We consider that the minimum number is (n+1) with n being the number of dimensions (parameters). One can go up to 2n, 3n, or even n^2 if needed.
If the size/position of the design in an image varies within the same-size bounding box, what is the impact on predictability?
This can be identified and adjusted during the learning process.
3. The Intersection of AI/ML, Optimization and Reliability
Q&A
Will AI/ML remove meshing in FEA or improve solver performance via neural networks?
It may not remove it completely, but it will definitely contribute to automation. We are not sure about the equation solver.
Q&A
For what data patterns is it better to use LSTM rather than an ARIMA model?
- ARIMA assumes stationary data (data that do not change much).
- For non-stationary data or multiple outcomes, LSTM is preferred.
- LSTM is supposed to be more accurate, but it is costly and requires large data sets.
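For illustration, here is a minimal statsmodels sketch of ARIMA forecasting on a synthetic 1-D, roughly stationary series; an LSTM would instead be trained on windows of the series and typically needs far more data.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# A 1-D, roughly stationary series: a noisy oscillation around a fixed mean.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.standard_normal(200)

# Fit a small ARIMA(p, d, q) model and forecast the next 10 steps.
model = ARIMA(series, order=(2, 0, 1)).fit()
print(model.forecast(steps=10))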
From your experience so far, which method is the most widely used and/or should be the most used in the near future?
ARIMA is a very popular method for 1-D forecasting (VAR methods may be used for multidimensional forecasting). I believe that DMD and LSTM are the best.
In the last example, how many data points were used for the LSTM model? The MLP had 3,000 and 10,000 data points.
Around 3700.
Can SVD be used in real-time applications considering its compute-intensive nature?
Yes. The decomposition is made only once and can be updated every now and then. The cost is not high.
Have you come across an application where the problem was extremely hard to cast into the form of matrices and ARIMA was therefore rendered inapplicable?
Yes, ARIMA works well for 1-D, stationary processes. One needs to use more advanced methods (mixture models) for more complex cases. The setup of the problem is a bit tedious.