Shield automatically allows legitimate bots/crawlers from services like Google, but it doesn't automatically block bad bots based on their labels.
Shield does, however, automatically block bad bots based on their behaviour. If they attempt any actions that trigger the Shield security plugin, they will be black-marked, and eventually blocked completely. This happens regardless of the "user agent" string the bot presents - the same string you might be attempting to block.
If a bot is actually "bad", it isn't going to announce that fact by honestly labelling itself in its user agent. The most reliable way to identify a bad bot is by its behaviour - and Shield handles this for you.
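To make the behaviour-based approach concrete, here's a minimal Python sketch of the general idea - purely illustrative, and not Shield's actual implementation. The class names, threshold, and offence counting are all assumptions: suspicious actions increment a per-IP offence count, and once a threshold is reached the IP is blocked outright, no matter what user agent it sends.

```python
from collections import defaultdict

# Hypothetical limit: how many black-marks before a full block.
BLOCK_THRESHOLD = 3

class BehaviourTracker:
    """Illustrative behaviour-based bot scoring (not Shield's real code)."""

    def __init__(self, threshold: int = BLOCK_THRESHOLD):
        self.threshold = threshold
        self.offences = defaultdict(int)  # IP address -> offence count

    def record_offence(self, ip: str) -> None:
        """Black-mark an IP for a suspicious action (e.g. a failed login)."""
        self.offences[ip] += 1

    def is_blocked(self, ip: str) -> bool:
        """An IP is blocked once its offences reach the threshold,
        regardless of the user agent string it presents."""
        return self.offences[ip] >= self.threshold

# Example: three suspicious actions from one IP lead to a block.
tracker = BehaviourTracker()
for _ in range(3):
    tracker.record_offence("203.0.113.7")
print(tracker.is_blocked("203.0.113.7"))   # True
print(tracker.is_blocked("198.51.100.1"))  # False
```

The point of the sketch is that the user agent never appears in the blocking decision - only the recorded behaviour does.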
While it's not a huge security issue, we'll probably add the option to manually specify user agents to block, for bots that don't honour robots.txt (in a future release).