You want the backup to be fast, to minimize the time it impacts your production traffic. In that case, the best solution is to run backups on a replica. That has zero impact on your production traffic (except that you need to enable the binlog on the master, which adds a small amount of I/O load). Any locking or pausing caused by the backup will only affect the replica. After the backup finishes, the replica resumes replicating changes and catches up to the master.
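As a sketch (the hostname is a placeholder), you run the backup against the replica, then confirm it has caught up before trusting it again. Note that MySQL 8.0.22+ uses `SHOW REPLICA STATUS`; older versions use `SHOW SLAVE STATUS` and report `Seconds_Behind_Master`:

```shell
# Run the backup job against the replica, never the master.
# Afterwards, check that replication lag has returned to zero:
mysql -h replica.example.com -e "SHOW REPLICA STATUS\G" | grep Seconds_Behind
```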
You should be concerned not only with the speed of the backup, but also with the speed of the restore. Several other answers have suggested mysqldump, but restoring a mysqldump backup means replaying the thousands of CREATE and INSERT statements contained in it. This can take hours, depending on your hardware and the size of your databases.
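To make the asymmetry concrete, here's a minimal sketch (hostnames, users, and paths are placeholders). The dump is just a text file of SQL; the restore is the slow half because every statement has to be re-executed:

```shell
# mysqldump writes a logical dump: a text file of CREATE TABLE and
# INSERT statements. --single-transaction gives a consistent snapshot
# of InnoDB tables without locking them.
mysqldump -h replica.example.com -u backup --single-transaction \
    --all-databases > /backups/full.sql

# Restoring replays every one of those statements, one at a time --
# this is the part that can take hours on a large dataset.
mysql -h restored.example.com -u admin < /backups/full.sql
```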
Percona XtraBackup (http://www.percona.com/doc/percona-xtrabackup/) is a very good alternative: it's faster than mysqldump at creating the backup (except in a few edge cases), and its backup format can literally replace the data directory, so you can restore as fast as you can rsync the files into place.
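The basic workflow looks like this (paths are placeholders; these flags are for XtraBackup 8.x, where earlier releases used the `innobackupex` wrapper instead):

```shell
# XtraBackup copies the InnoDB data files directly while the server runs.
xtrabackup --backup  --target-dir=/backups/base/
xtrabackup --prepare --target-dir=/backups/base/   # apply the redo log

# Restore = copy the files back into an empty datadir and fix ownership.
# No SQL is replayed, which is why it's so much faster than mysqldump.
xtrabackup --copy-back --target-dir=/backups/base/
chown -R mysql:mysql /var/lib/mysql
```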
Finally, I want to remind you to verify your backups. I've known businesses that had an emergency, went to restore from their backup, and discovered that their last six months of backups were unusable for some reason. The best way to verify a backup is to restore it on a test server, then run a few ad hoc queries to confirm it contains the data you expect. And of course, you can always ask your hosting provider to do that for you.
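The spot-check after a test restore doesn't need to be elaborate; a couple of sanity queries go a long way (the hostname, schema, and table here are hypothetical):

```shell
# After restoring onto a test server, confirm the data is actually there
# and reasonably fresh before declaring the backup good.
mysql -h test.example.com -e "SELECT COUNT(*) FROM shop.orders;"
mysql -h test.example.com -e "SELECT MAX(created_at) FROM shop.orders;"
```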