nnenum: The Neural Network Enumeration Tool

Given an unsafe set and a neural network, nnenum checks whether the set overlaps with the output of the network.

Application domain/field

Formal verification of neural networks (machine learning).

Type of tool

Neural network verifier

Expected input

A neural network together with an input set and an unsafe output set to check against. The tool accepts networks in ONNX format and properties in VNNLIB format.

Expected output

Safe or unsafe. nnenum checks whether the output of the neural network (for the given input set) overlaps with the provided unsafe set. If they do not overlap, the network is considered safe.
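As an illustration of this safe/unsafe semantics (not of nnenum's algorithm), the sketch below samples a toy ReLU network over a box-shaped input set and checks whether any sampled output falls into an unsafe halfspace; all weights, bounds, and the unsafe region are invented for the example. Note that sampling can only find violations, whereas a verifier like nnenum proves their absence over the entire input set.

```python
import numpy as np

# Toy fully-connected ReLU network: 2 inputs -> 3 hidden -> 1 output.
# Weights and sets are invented for this illustration.
W1 = np.array([[1.0, -1.0], [0.5, 2.0], [-1.5, 0.3]])
b1 = np.array([0.1, -0.2, 0.0])
W2 = np.array([[1.0, -2.0, 0.5]])
b2 = np.array([0.3])

def network(x):
    hidden = np.maximum(W1 @ x + b1, 0.0)  # ReLU activation
    return W2 @ hidden + b2

# Input set: an axis-aligned box given by per-input lower/upper bounds.
lb = np.array([-1.0, -1.0])
ub = np.array([1.0, 1.0])

# Unsafe output set: the halfspace { y : y >= 2.0 }.
def in_unsafe_set(y):
    return y[0] >= 2.0

# Sampling can only falsify safety; it never proves it.
rng = np.random.default_rng(0)
violations = sum(in_unsafe_set(network(rng.uniform(lb, ub))) for _ in range(10_000))
print(f"unsafe samples found: {violations}")
```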

Internals

nnenum focuses on the verification of fully-connected, feedforward neural networks with ReLU activation functions. Its verification approach is based on geometric path enumeration: the linear regions ("paths") of the ReLU network are explored exactly, with overapproximation and parallelization used to prune and speed up the search (see the simplified sketch below).
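As a deliberately simplified sketch of the path-enumeration idea (not nnenum's implementation, which uses efficient star-set representations rather than brute force): for a fixed ReLU activation pattern the network is affine, so each pattern can be checked against the input box and the unsafe set with one linear-programming feasibility query. The toy network and sets below are invented for the example.

```python
import itertools

import numpy as np
from scipy.optimize import linprog

# Toy fully-connected ReLU network: 2 inputs -> 2 hidden -> 1 output.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.1, -0.2])
W2 = np.array([[1.0, -2.0]])
b2 = np.array([0.3])

lb = np.array([-1.0, -1.0])   # input box lower bounds
ub = np.array([1.0, 1.0])     # input box upper bounds
UNSAFE_THRESHOLD = 2.0        # unsafe output set: { y : y >= 2.0 }

def pattern_reaches_unsafe(pattern):
    """Use an LP feasibility check to decide whether some input in the box
    follows this ReLU activation pattern and yields an unsafe output."""
    a_ub, b_ub = [], []
    # Constrain the pre-activations z = W1 @ x + b1 to match the pattern.
    for i, active in enumerate(pattern):
        if active:  # z_i >= 0  <=>  -W1[i] @ x <= b1[i]
            a_ub.append(-W1[i])
            b_ub.append(b1[i])
        else:       # z_i <= 0  <=>  W1[i] @ x <= -b1[i]
            a_ub.append(W1[i])
            b_ub.append(-b1[i])
    # With the pattern fixed, the network output is affine: y = c @ x + d.
    diag = np.diag(pattern)
    c = (W2 @ diag @ W1)[0]
    d = (W2 @ diag @ b1 + b2)[0]
    # Unsafe constraint y >= threshold  <=>  -c @ x <= d - threshold.
    a_ub.append(-c)
    b_ub.append(d - UNSAFE_THRESHOLD)
    res = linprog(np.zeros(2), A_ub=np.array(a_ub), b_ub=np.array(b_ub),
                  bounds=list(zip(lb, ub)), method="highs")
    return res.status == 0  # feasible: an unsafe input exists on this path

# Enumerate every activation pattern (2^hidden "paths") and check each one.
unsafe = any(pattern_reaches_unsafe(p)
             for p in itertools.product([0, 1], repeat=2))
print("unsafe" if unsafe else "safe")
```

Brute-force enumeration like this scales exponentially in the number of ReLU neurons; the optimizations described in the CAV 2020 paper exist precisely to avoid exploring most of these paths.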

Links

Repository: https://github.com/stanleybak/nnenum

Related papers

Improved Geometric Path Enumeration for Verifying ReLU Neural Networks (CAV 2020)

Last publication date

14 July 2020

Related tools

Other tools for the verification of neural networks: Marabou, Neurify, NNV.

ProVerB specific



ProVerB is a part of SLEBoK. Last updated: February 2023.