We are pleased to (finally) release RFO Benchmark v2!
Please post your results in
RFO Benchmark v2 2016 results
RFO Benchmark v2 2015 results
RFO Benchmark v2 2014 results
RFO Benchmark v2 Virtualization results
This new test is a little easier to work with: simply pick the "Set" you want to run, as defined by the shortcuts in the root Benchmark folder, and double click. Your results file will end up in that same folder.
There are three "sets" by default.
- Standard: (best for posting on RFO to make comparisons) This set includes the same benchmark modules as the old test, but it runs three times and averages the results.
Note: The journals here are similar to the old test, but not identical, so results should NOT be compared with results from the old test.
- Expanded: (best for testing to make purchase decisions) This set adds new benchmarks, expands existing ones, and builds a "heavier" model to more realistically test hardware. It also runs 5 times, throws out the high and low outliers for each benchmark, and averages the rest.
Note: This test can take a LONG time to complete, even for you speed demons getting double digit results on the old test.
- Simplified: (best for a quick benchmark) This set does away with the Render benchmark and the non-hardware-accelerated Graphics benchmark, and only runs once. It also provides an example of the "Messages" functionality, which displays which benchmark module is being run. This can be helpful when monitoring CPU, GPU or RAM utilization to see where your bottlenecks are.
A few things to be aware of:
- All options are defined as arguments in the shortcut. Options include:
-testSet:??? controls which set, as defined in the XML file, is run.
-runCount:# sets the number of times the set should be repeated.
-csv produces a Comma Separated Values file of the raw data, as well as the regular "Results" file formatted for reading.
-messages produces a Windows popup message as each benchmark module starts.
- When runCount is between 2 and 4, results are averaged. When runCount is 5 or higher, the high and low outliers are dropped and the remaining data is averaged. All raw data is tabulated at the bottom of the Results file, as well as in the CSV file. More than 5 runs is probably unnecessary.
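For anyone curious, the averaging and outlier-dropping rules above can be sketched in a few lines of Python. This is a hypothetical illustration of the described behavior, not code from the tool itself:

```python
def summarize(times):
    """Summarize per-run benchmark times per the rules above:
    1 run is reported as-is, 2-4 runs are averaged, and 5 or more
    runs drop the single highest and lowest values, then average
    the rest.
    """
    if len(times) == 1:
        return times[0]
    if len(times) <= 4:
        return sum(times) / len(times)
    trimmed = sorted(times)[1:-1]  # drop the low and high outliers
    return sum(trimmed) / len(trimmed)

# Example: 5 runs; 80.0 and 120.0 are dropped, the rest averaged
print(summarize([100.0, 95.0, 105.0, 120.0, 80.0]))  # -> 100.0
```

Dropping only the single highest and lowest values keeps the summary robust against one bad run (e.g. a background task stealing CPU) without needing many repetitions.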
- Sets and Benchmarks are defined in the XML, along with behaviors like notes, grouping and summing, Revit INI modifications, etc. This makes the tool much more modular, so you can use it as a jig for creating your own benchmarks.
- Thanks to some great input from the Factory, the Journals have been modified to eliminate some potential misbehavior (the PermissiveJournal debug mode), address some benchmarks that were either not reporting the full time to complete or were reporting extraneous time, and make maintenance easier moving forward. This does, however, mean the results cannot be compared with results from the earlier test.
- A couple of interesting techniques are used in the Journal files, for you Journal nerds, including...
VBScript Conditionals: Journals allow for VBScript, which can include conditional journal blocks. The Model_creation journal has an example of this related to Revit Structure.
Disarticulated Journals: A single recorded journal can be broken apart, with Save and Open sections added along with the data needed to reestablish the processing environment, resulting in a much more modular journal structure. An example of this can be seen in the New_file and Model_creation journals, which used to be a single Journal file.
Optional Journals: Using Disarticulated journals, an optional journal can be inserted into a benchmark sequence. An example of this can be seen in the Array_link journal used in the Expanded test.
I hope these techniques prove useful for folks looking to expand their use of Journal files. And again, big thanks to the Factory for cluing me in on these possibilities.
- If you have any problems, please report them in this thread so the test can be revised and a new build issued.
Please keep all discussions about the tools in this thread, and use the Result threads ONLY for results.

One last note: some folks have had issues with the downloads being corrupted. If you run into this, I recommend trying the download with a browser that is NOT Internet Explorer. Maybe IE9 is better, but even IE8 sometimes produces a corrupted download. Chrome and Safari for Windows have both worked well for me, and others have reported good results with Firefox.

Thanks all, I hope you find RFOBenchmark useful.
If you are looking for the old Benchmark and results it can be found here.
NOTE: Downloads updated to version 2.1, which addresses the European decimal bug, amongst other things. Comparisons between tests run on v2.0 and v2.1 are valid.