I’m not sure that using a probabilistic model (which will fail, for sure, at some point) is relevant to proving certain properties of code

Well, we have two methods here: one that is mathematically sound, and one that at least covers the majority of vulnerabilities that would fall under a normal bell-curve distribution.

The probabilistic (fuzzing) model, as you described, will not catch every vulnerability under the bell curve, but it is still important for testing functions and for actually discovering those statistical edge cases where the function’s behaviour and output are unexpected.
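A minimal sketch of what this kind of randomized testing looks like, using a hypothetical stand-in for the function under test (the function, its bug, and the input range are all invented for illustration):

```python
import random

def function_under_test(x: int) -> int:
    # Hypothetical stand-in for a real function; it hides a
    # deliberate edge-case bug at one rare input value.
    if x == 42:
        raise ValueError("unexpected edge case")
    return x * 2

def fuzz(iterations: int = 10_000, seed: int = 0) -> list[int]:
    """Feed random inputs to the function and collect every input
    that triggers an unexpected failure."""
    rng = random.Random(seed)
    failures = []
    for _ in range(iterations):
        x = rng.randint(0, 100)
        try:
            function_under_test(x)
        except Exception:
            failures.append(x)
    # With this many draws over a small range, the rare edge case
    # is almost certain to be hit at least once.
    return failures
```

This is the statistical point made above: the fuzzer finds the edge case only because random sampling happens to land on it, so rarer inputs (a wider range, fewer iterations) can slip through untested.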

No, I was talking about LLMs, which are probabilistic models, and the idea of translating code with them to prove something. That sounds weird to me, but I’ll be happy to be wrong.

You are saying that the LLM will not convert from one language to another with 100% accuracy? The converted statements would have to be verified some other way?

Yes, that’s how LLMs work.

What about fuzzing a function before the conversion and after the conversion, and comparing the output of both to ensure they are identical?

What if each element of output from both fuzzing samples (before and after) was passed as a parameter into a string-comparison function and checked for an exact match?

What I’m trying to say here is that you take a function, convert it using the LLM to Coq (a formal language), then fuzz the before and after versions of the conversion and compare the output to ensure they are identical. So you have the Python function’s output in one parameter of a string comparison, and the Coq function’s output in the other parameter. The output of both functions must be identical.
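The differential-fuzzing idea described above can be sketched roughly like this. Since Coq output isn’t available here, `translated` is a hypothetical Python stand-in for the LLM-converted version (both functions and the input range are invented for illustration):

```python
import random

def original(x: int) -> int:
    # The original Python function.
    return x * x + 1

def translated(x: int) -> int:
    # Stand-in for the LLM-translated version (e.g. code extracted
    # back from a Coq definition); here, just a second implementation.
    return x * x + 1

def differential_fuzz(iterations: int = 1_000, seed: int = 0) -> list[int]:
    """Run both versions on the same random inputs and collect every
    input where the stringified outputs differ."""
    rng = random.Random(seed)
    mismatches = []
    for _ in range(iterations):
        x = rng.randint(-1000, 1000)
        # String comparison of the two outputs, as suggested above.
        if str(original(x)) != str(translated(x)):
            mismatches.append(x)
    return mismatches
```

An empty mismatch list only shows the two versions agree on the inputs that were sampled; it is evidence of equivalence, not a proof of it, which is the earlier objection about probabilistic methods.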

Is the OSS-Fuzz tool compiled with the Python libraries? Is it compiled into dom0? Do users have access to these fuzzing functions via dom0?