A good practice for documenting idea rating sheet results is to type them into a spreadsheet (see the Spreadsheet Template in the Library of Resources). Once you have columns counting the number of dots in each option on the agreement scale for each sheet, you can apply an equation to produce an "agreement score," which is most useful for sorting results.
Below is a recommended equation to calculate an agreement score:
AGREEMENT SCORE =
(Strong_Agreement_Dots * 10 + Agreement_Dots * 5 + Disagreement_Dots * -5 + Strong_Disagreement_Dots * -10)
/ (Strong_Agreement_Dots + Agreement_Dots + Neutral_Dots + Disagreement_Dots + Strong_Disagreement_Dots)
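If you prefer to compute this outside the spreadsheet, here is a minimal sketch of the same equation in Python. The function name and parameters are my own naming, not part of any template:

```python
def agreement_score(strong_agree, agree, neutral, disagree, strong_disagree):
    """Weighted average of dots: +10/+5 for agreement, -5/-10 for disagreement.

    Neutral dots count toward the total but add no weight.
    Returns None when a sheet has no dots at all.
    """
    total = strong_agree + agree + neutral + disagree + strong_disagree
    if total == 0:
        return None  # no dots, no meaningful score
    weighted = (strong_agree * 10 + agree * 5
                + disagree * -5 + strong_disagree * -10)
    return weighted / total

# All dots on "Strong Agreement" gives the maximum score.
print(agreement_score(5, 0, 0, 0, 0))   # -> 10.0
# A split sheet lands in the middle.
print(agreement_score(1, 0, 0, 0, 1))   # -> 0.0
```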
The result is a score between -10 (all dots on "Strong Disagreement") and 10 (all dots on "Strong Agreement").
When sorting results by this agreement score, I recommend also filtering to include only sheets that received a sensible minimum number of dots. For example, if a sheet got only 5 dots out of 25 participants, the results are not representative enough to take seriously. I suggest only considering agreement scores for sheets with a minimum of 12 total dots (and signatures), or at least 2/3 the number of participants if there were fewer than 18 total participants.
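That minimum-dots rule can be written out as a small helper. This is a sketch of my suggested thresholds, not a fixed standard; adjust the numbers to your group:

```python
import math

def min_dots_required(participants):
    """Minimum total dots for a sheet's score to be taken seriously.

    Suggested rule: at least 12 dots, or 2/3 of the participant count
    (rounded up) when fewer than 18 people took part.
    """
    if participants < 18:
        return math.ceil(participants * 2 / 3)
    return 12

# With 25 participants, a sheet needs 12 dots to count.
print(min_dots_required(25))  # -> 12
# With a small group of 15, two-thirds rounds up to 10.
print(min_dots_required(15))  # -> 10
```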
Note that "Confusion" or "Not Sure" dots are not included in the scoring. Again, you may want to filter out results that got, for example, three or more "confusion" dots, as this suggests participants did not have a consistent understanding of what they were rating.
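Combined with the minimum-dots check, the confusion filter is a one-line test per sheet. The dictionary fields here are illustrative names for your spreadsheet columns:

```python
# Hypothetical sheet records, as you might export them from the spreadsheet.
sheets = [
    {"idea": "Bike lanes on Main St", "total_dots": 20, "confusion_dots": 1},
    {"idea": "Redesign the logo",     "total_dots": 19, "confusion_dots": 4},
    {"idea": "Monthly potluck",       "total_dots": 6,  "confusion_dots": 0},
]

MIN_DOTS = 12       # minimum total dots (see rule above)
MAX_CONFUSION = 2   # three or more confusion dots disqualifies a sheet

reliable = [s for s in sheets
            if s["total_dots"] >= MIN_DOTS
            and s["confusion_dots"] <= MAX_CONFUSION]

print([s["idea"] for s in reliable])  # -> ['Bike lanes on Main St']
```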
You should also be wary of the number of signatures versus the number of dots. A significant difference may signal fraudulent dotting.
If you want to get really fancy, you can embed or link to scanned images of each sheet right in the spreadsheet.
Send me your examples and suggestions for using an agreement score, or post a comment below.