Type: New Feature
Resolution: Unresolved
Priority: Normal
We should give users some indication of how long their evaluation job might take to finish. There are some ways we can do this:
- We store the runtime of each job we run and, over time, fit a regression against the number of items in the dataset. We should be able to come up with a relationship between number of files and runtime (see the regression sketch after this list).
- We know that we do 720 permutations of parameters. We could hook into that loop and update the ETA at the end of each permutation (see the progress-based sketch below).
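A minimal sketch of the first idea, assuming we persist a record of (number of files, runtime) for each completed job somewhere we can read back. The JobRecord type and the function names (fit_runtime_model, estimate_runtime) are placeholders, not existing code:

```python
from dataclasses import dataclass


@dataclass
class JobRecord:
    n_files: int        # number of items in the dataset
    runtime_s: float    # wall-clock runtime in seconds


def fit_runtime_model(history: list[JobRecord]) -> tuple[float, float]:
    """Least-squares fit of runtime_s = slope * n_files + intercept."""
    n = len(history)
    mean_x = sum(r.n_files for r in history) / n
    mean_y = sum(r.runtime_s for r in history) / n
    sxx = sum((r.n_files - mean_x) ** 2 for r in history)
    sxy = sum((r.n_files - mean_x) * (r.runtime_s - mean_y) for r in history)
    slope = sxy / sxx if sxx else 0.0
    intercept = mean_y - slope * mean_x
    return slope, intercept


def estimate_runtime(n_files: int, model: tuple[float, float]) -> float:
    """Predict runtime for a new job from its dataset size."""
    slope, intercept = model
    return max(0.0, slope * n_files + intercept)
```

The model could be refitted each time a job finishes, so the estimate improves as we accumulate history.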
Either approach may also require knowledge of how many jobs are simultaneously running on different cores.
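A rough sketch of the second idea, hooking into the permutation loop. The 720 figure comes from the issue; run_permutation, report_eta, and the n_workers parameter are assumptions about how the loop and job status are wired up:

```python
import time
from typing import Callable, Iterable

TOTAL_PERMUTATIONS = 720  # number of parameter permutations per job


def run_evaluation(
    permutations: Iterable[dict],
    run_permutation: Callable[[dict], None],
    report_eta: Callable[[float], None],
    n_workers: int = 1,
) -> None:
    """Run each permutation and report a remaining-time estimate after each one."""
    start = time.monotonic()
    for done, params in enumerate(permutations, start=1):
        run_permutation(params)  # existing per-permutation work (assumed)
        per_permutation = (time.monotonic() - start) / done
        # If permutations are spread across several cores, scale the estimate
        # by the number of active workers (a rough approximation).
        remaining = (TOTAL_PERMUTATIONS - done) * per_permutation / max(n_workers, 1)
        report_eta(remaining)  # e.g. write to the job's status record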
As an extension to this, we may also want to tell people with jobs in the queue how long they might have to wait for their job to start.
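One possible shape for that estimate, assuming we can predict a runtime for each job ahead in the queue (using whichever estimator we adopt above) and know how many workers are available; estimate_queue_wait is a hypothetical helper:

```python
def estimate_queue_wait(predicted_runtimes_s: list[float], n_workers: int) -> float:
    """Rough wait estimate: total predicted work ahead of this job, shared across workers."""
    return sum(predicted_runtimes_s) / max(n_workers, 1)
```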