Log profiles predicted by the ML model, profiles tuned with the profile inference guards, and profiles collected via instrumentation profiling.
The value should point to an ML inference log file.

The `LogMLInference` flag introduces the ML Debug Tool mode that enables debugging of the ML profile inference in
the Graal compiler.

# Usage

1) Change to the `graal-enterprise/substratevm-enterprise/` directory.

2) Write the program you want to inspect, for example, `SortExample.java`:

    import java.util.Arrays;
    import java.util.Random;

    public class SortExample {

        public static void main(String[] args) {
            int n = 30_000_000;
            int[] data = generateRandomArray(n, 0L);
            long startTime = System.nanoTime();
            heapSort(data, 0, data.length);
            long stopTime = System.nanoTime();
            long delta = (stopTime - startTime) / 1_000_000;
            System.out.println("Array of " + n + " elements sorted in: " + delta + " milliseconds.");
        }

        private static void heapSort(int[] a, int low, int high) {
            for (int k = (low + high) >>> 1; k > low; ) {
                pushDown(a, --k, a[k], low, high);
            }
            while (--high > low) {
                int max = a[low];
                pushDown(a, low, a[high], low, high);
                a[high] = max;
            }
        }

        private static void pushDown(int[] a, int p, int value, int low, int high) {
            for (int k ;; a[p] = a[p = k]) {
                k = (p << 1) - low + 2; // Index of the right child
                if (k > high) {
                    break;
                }
                if (k == high || a[k] < a[k - 1]) {
                    --k;
                }
                if (a[k] <= value) {
                    break;
                }
            }
            a[p] = value;
        }

        public static int[] generateRandomArray(int n, long seed) {
            Random random = new Random(seed);
            int[] arr = new int[n];
            for (int i = 0; i < n; i++) {
                arr[i] = random.nextInt();
            }
            return arr;
        }
    }

3) Run the `mx ml debug` command:

    mx ml debug --input-file SortExample.java

The tool compiles the input program, runs the ML inference to predict branch profiles, and applies the profile
inference heuristics to tune the predicted profiles. In addition, the tool performs an instrumented-image build and a
profile-collection run to collect branch profiles. The tool then generates a `.csv` report with details about each
prediction. For each `If` node, the report contains:

- the declaring class and the name of the method in which the node is located,
- the node and its source position,
- the block and the block frequency,
- the profiled execution probability of the true branch of the node,
- the predicted execution probability of the true branch of the node,
- the execution probability tuned by the profile inference heuristics (guards),
- the global execution frequency and the execution count of the node,
- the absolute error and the weighted absolute error (the absolute error multiplied by the global execution frequency
  of the node) of the ML prediction.

Nodes are sorted in decreasing order of the weighted absolute error.
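
To make the two error columns concrete, here is a minimal sketch (an illustration, not the tool's actual implementation; the class and method names are hypothetical) of how the metrics relate to the report's probability and frequency columns:

```java
public class PredictionError {

    /** Absolute error between the profiled and the predicted true-branch probabilities. */
    static double absoluteError(double profiled, double predicted) {
        return Math.abs(profiled - predicted);
    }

    /** Weighted absolute error: the absolute error scaled by the node's global execution frequency. */
    static double weightedAbsoluteError(double profiled, double predicted, double globalFrequency) {
        return absoluteError(profiled, predicted) * globalFrequency;
    }

    public static void main(String[] args) {
        // Hypothetical values: a branch profiled at 0.75 but predicted at 0.5,
        // sitting in a block with a global execution frequency of 1000.
        double err = absoluteError(0.75, 0.5);                      // 0.25
        double weighted = weightedAbsoluteError(0.75, 0.5, 1000.0); // 250.0
        System.out.println("absolute error = " + err + ", weighted = " + weighted);
    }
}
```

A mispredicted branch in a hot block thus dominates the report, which is why sorting by the weighted error surfaces the predictions that matter most for performance.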

4) To plot the graphs of specific methods, use the `--method-filter` argument.
   For example, to plot the graphs of the `heapSort` and `pushDown` methods from the previous example, run:

    mx ml debug --input-file SortExample.java --method-filter=heapSort,pushDown

This way, besides logging the profiles, the tool runs the Ideal Graph Visualizer (IGV) in the background and draws the
graphs of the methods matching the regex filter.
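
Conceptually, a comma-separated filter value selects every method whose name matches any of the listed regex patterns. A minimal sketch of this kind of matching (an assumption about the filter semantics, not the tool's actual implementation) could look like:

```java
import java.util.Arrays;
import java.util.regex.Pattern;

public class MethodFilterSketch {

    /** Returns true if the method name fully matches any of the comma-separated regex patterns. */
    static boolean matches(String filter, String methodName) {
        return Arrays.stream(filter.split(","))
                .anyMatch(p -> Pattern.compile(p).matcher(methodName).matches());
    }

    public static void main(String[] args) {
        String filter = "heapSort,pushDown";
        System.out.println(matches(filter, "heapSort"));            // true
        System.out.println(matches(filter, "generateRandomArray")); // false
    }
}
```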

# Benchmarks

To debug the ML predictions in the native image benchmarks, we introduce the `profile-inference-debug-ee` JVM config
that enables benchmarks to run in the debug mode. For example, to log profiles in the `scala-kmeans` benchmark, run:

    mx --java-home $JAVA_HOME --env ni-ee benchmark --ignore-suite-commit-info nativeimage-benchmarks \
        renaissance-native-image:scala-kmeans -- --jvm=native-image --jvm-config=profile-inference-debug-ee

# Logging the ML predictions without the `mx` tool

To log the ML predictions when using the `native-image` command directly, provide the collected profiles, enable the
ML feature extraction and the ML profile inference on the command line, and provide the log file for the profile
comparison. For example, to debug the `SortExample` program from above, run:

    native-image --pgo=default.iprof SortExample -H:+MLGraphFeaturesExtraction -H:+MLProfileInference \
        -H:LogMLInference=ml-inference-log.sortexample.csv

To debug specific functions (e.g., `findClosest`) in IGV, run:

    mx --java-home $JAVA_HOME --env ni-ee benchmark --ignore-suite-commit-info nativeimage-benchmarks \
        renaissance-native-image:scala-kmeans -- --jvm=native-image --jvm-config=profile-inference-debug-ee \
        -Dnative-image.benchmark.extra-image-build-argument=-H:Dump=:2 \
        -Dnative-image.benchmark.extra-image-build-argument=-H:PrintGraph=Network \
        -Dnative-image.benchmark.extra-image-build-argument=-H:MethodFilter=findClosest