This paper describes two experiments that examined overconfidence in spreadsheet development. Overconfidence has been observed widely in spreadsheet development and could account for the rarity of testing by end-user spreadsheet developers. The first experiment evaluated a new way of measuring overconfidence and demonstrated that overconfidence is indeed strong among spreadsheet developers. The second experiment attempted to reduce overconfidence by telling subjects in the treatment group the percentage of students who had made errors on the task in the past. This warning reduced overconfidence and reduced errors somewhat, although not enough to make spreadsheet development safe.