AI integration into safety-related systems: What does this mean for development?

The integration of AI technologies into control systems promises new opportunities to increase the availability and performance of devices and to improve their safety. With regard to functional safety (FuSa), however, special requirements apply to the development and integration of AI. The reason: while it can be stated unambiguously whether a classic system meets its requirements, the performance of AI technologies can often only be assessed statistically, with a corresponding variance.
This variance must be taken into account during the development of the AI technology and weighed against the risk of a functional insufficiency.
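The statistical nature of such evidence can be illustrated with a minimal sketch, assuming a binary pass/fail test set and a Wilson score interval as the confidence method; the numbers are purely illustrative:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (approx. 95% for z = 1.96).

    Even a high observed accuracy comes with a confidence band whose width
    depends on the number of test cases - the 'variance' referred to above.
    """
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return center - half, center + half

# Illustrative example: 9,900 correct outputs in 10,000 test cases
lo, hi = wilson_interval(9900, 10_000)
print(f"observed accuracy: 0.9900, 95% interval: [{lo:.4f}, {hi:.4f}]")
```

The point of the sketch: the observed 99.0% accuracy is only a point estimate; a safety argument has to work with the lower bound of the interval, which shrinks only as the amount of independent test evidence grows.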
NewTec is your partner for a competent consideration of the safety aspects of AI integration and for compliance with regulatory requirements (keyword: EU AI Act). We support you in system, fault and risk analysis: by identifying possible critical AI failures, determining the accuracy and performance required to prevent those failures, and analyzing the risks associated with a potential failure. Using a process analysis, we clarify whether your development processes are already suitable for AI technology and where there may be a need for action to avoid systematic errors. We determine for you whether the planned hardware is reliable enough to run the AI technology. And our experts advise and support you with regard to approval, for example by identifying the relevant normative and regulatory requirements.

[Figure: Procedure model in the development cycle]

AI services from NewTec
- Identification of specific error modes of the AI technology used
- Analysis of the possible error propagation of the AI technology along the information flow in the system context
- Analysis of possible risks due to failures that can be caused by the AI technology
- Derivation of target values for the functional suitability of the AI technology (e.g. required accuracy)
- Validation of the target values with regard to achievability in the system context with the necessary confidence
- Hardware analysis to determine the probability of random faults on the complex hardware used to execute AI technologies
- Analysis of the specific development processes of the AI technology used with regard to ensuring systematic capability
- Preparation of the developed content and of its presentation to test bodies
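The fourth and fifth points above - deriving target values and validating their achievability with the necessary confidence - can be sketched with a standard zero-failure demonstration argument, assuming independent test cases and an exact binomial bound; the target numbers are illustrative, not taken from any standard:

```python
import math

def samples_for_zero_failures(max_failure_rate: float, confidence: float) -> int:
    """Minimum number of independent, failure-free test cases needed to claim,
    with the given confidence, that the true failure rate is below the target.

    Derivation: with zero observed failures, (1 - p)**n <= 1 - C must hold,
    which rearranges to n >= ln(1 - C) / ln(1 - p).
    """
    return math.ceil(math.log(1 - confidence) / math.log(1 - max_failure_rate))

# Illustrative target: failure rate below 1e-3, demonstrated at 99% confidence
n = samples_for_zero_failures(1e-3, 0.99)
print(n)  # 4603 failure-free test cases required
```

Such a calculation makes the trade-off explicit: the stricter the derived target value, the larger the failure-free test campaign needed to validate it, which is exactly why achievability must be checked in the system context before the target is fixed.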