Fuzz testing, or fuzzing, is a software testing technique that basically consists of finding implementation bugs by injecting malformed or semi-malformed data: it's a type of Black Box Testing, which means testing a closed system from a behavioral point of view.
A trivial example:
Let's consider a program in which an integer stores the result of a user's choice among three options. When the user picks one, the choice will be 0, 1 or 2, which makes three practical cases. But what if we transmit 3, or 255? We can, because integers are stored in fixed-size variables. If the default switch case hasn't been implemented securely, the program may crash and lead to "classical" security issues: (un)exploitable buffer overflows, DoS, ...
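A minimal sketch of this bug, in Python for readability (a hypothetical example; in C, the same missing check could corrupt memory instead of raising a clean exception):

```python
# Hypothetical sketch: a choice handler with no secure default case.
LABELS = ["yes", "no", "maybe"]  # valid choices: 0, 1 or 2

def handle_choice(choice):
    # No bounds check / no default case: anything outside 0..2 is unhandled.
    return LABELS[choice]

print(handle_choice(1))    # "no" -- a normal case
print(handle_choice(255))  # IndexError: the kind of crash a fuzzer hunts for
```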
Fuzzing is the art of automatic bug finding: its role is to find software implementation faults, and to identify them if possible.
A fuzzer is a program which injects semi-random data into a program/stack and detects problems.
The data-generation part is handled by generators, and error detection relies on debugging tools.
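As an illustration, here is a minimal mutational fuzzer sketch, assuming a hypothetical local target program ("./target") that reads its input from stdin: the generator flips random bytes of a known-valid sample, and error detection simply watches for abnormal process exits:

```python
import random
import subprocess

def mutate(seed, n_flips=4):
    # Generator: flip a few random bytes of a known-valid sample.
    data = bytearray(seed)
    for _ in range(n_flips):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

seed = b"GET /index.html HTTP/1.0\r\n\r\n"  # a known-valid input
for i in range(1000):
    case = mutate(seed)
    proc = subprocess.run(["./target"], input=case, capture_output=True)
    # On POSIX, a negative return code means the process was killed by a
    # signal (e.g., SIGSEGV) -- a likely bug worth saving for analysis.
    if proc.returncode < 0:
        print("crash on iteration %d: %r" % (i, case))
```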
A quite "efficient" fuzzer is:
- protocol/file-format dependant
- data-type dependant
Why? Because a program only understands sufficiently structured data. If you connect to a web server in a raw way, it will only respond to the commands it knows, such as GET (or possibly crash). If you try totally random inputs, it will in most cases respond with a "method not implemented" error.
Take a string 10 characters long, supposed to fuzz a web server, and generate all the possible values of that string. With 256 possible values per byte, that makes 256^10 (about 1.2 x 10^24) candidates, and only those starting with "GET " even look like a request: the odds of hitting one at random are 1 in 256^4, roughly one in four billion.
That's why a fuzzer has to understand at least the basics of the protocol or file format it targets.
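A sketch of that idea, assuming a web server listening on a placeholder address (127.0.0.1:8080): the structural parts of the request ("GET ", the HTTP version, the line endings) are kept valid, and only the path is fuzzed:

```python
import random
import socket
import string

CHARSET = string.ascii_letters + string.digits + string.punctuation

def fuzz_path(length=10):
    # Random path, but without CR/LF so the request structure stays valid.
    return "".join(random.choice(CHARSET) for _ in range(length))

for _ in range(100):
    request = "GET /%s HTTP/1.0\r\n\r\n" % fuzz_path()
    try:
        with socket.create_connection(("127.0.0.1", 8080), timeout=2) as s:
            s.sendall(request.encode("ascii"))
            reply = s.recv(4096)  # empty/garbled replies deserve a closer look
    except OSError:
        print("server stopped answering on: %r" % request)  # possible crash
        break
```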
Comparison with Cryptanalysis
The number of possible solutions to try is the explorable solution space. The aim of cryptanalysis is to reduce this space: to find a way of having fewer keys to try than pure brute force would need to decrypt something.
In this regard, fuzzers try to reduce the number of useless tests: the values that we already know have little chance of working.
There's a big difference with crypto, though: the more "intelligent" your fuzzer is, the fewer weird errors it will find. You reduce unpredictability in favor of speed; that's a compromise.
A fuzzer would try combinations of attacks on:
- numbers (signed/unsigned integers/float...)
- metadata: information carried as text (mostly strings)
- the protocol/file format itself (field size attacks, pure binary sequences...)
A quick (and common) approach to fuzzing is, for each type of data defined here, to define lists of "known-to-be-dangerous values" (fuzz vectors) and to inject them in place of the classical values; see the fuzzing vectors resource listed below for real-life examples.
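A minimal sketch of that substitution approach (the vectors shown are a small illustrative sample, not an authoritative list):

```python
# Per-type lists of known-to-be-dangerous values (illustrative only).
FUZZ_VECTORS = {
    "number": [0, -1, 255, 256, 65535, 2**31 - 1, -2**31],
    "string": ["", "A" * 10000, "%s%s%s%s", "../../../etc/passwd", "\x00"],
}

def inject(valid_fields, name, vector):
    # Return a copy of a valid request with one field replaced by a vector.
    mutated = dict(valid_fields)
    mutated[name] = vector
    return mutated

valid = {"method": "GET", "path": "/index.html", "length": 42}
for vec in FUZZ_VECTORS["string"]:
    test_case = inject(valid, "path", vec)  # feed test_case to the target
```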
Protocols and file formats imply standards, which are sometimes blurry, very complicated or badly implemented: that's why developers sometimes mess up in the implementation process (because of time/cost constraints). That's why it can be interesting to take the opposite approach: take a standard, look at all the mandatory features and constraints, and try all of them: forbidden/reserved values, linked parameters, field sizes (a lot of implementations are said not to verify field sizes). That would be hand-fuzzing.
Fuzzers usually try single-level attacks, which means changing only one parameter at a time. A highly intelligent fuzzer would detect or know about linked parameters and play on them together, as sketched below.
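For example, assuming a simple hypothetical binary format with a 2-byte length field followed by its payload, a one-parameter-at-a-time fuzzer would mutate the payload while keeping the length correct; a fuzzer aware of the link would also produce deliberately inconsistent pairs:

```python
import struct

def build_record(payload, declared_len=None):
    # 2-byte big-endian length field, then the payload.
    length = len(payload) if declared_len is None else declared_len
    return struct.pack(">H", length) + payload

consistent  = build_record(b"A" * 300)        # length matches the payload
lying_short = build_record(b"A" * 300, 4)     # claims 4 bytes, sends 300
lying_long  = build_record(b"A" * 4, 65535)   # claims 65535 bytes, sends 4
```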
Fuzzing tools can detect trivial errors quite easily, but are less gifted with complex ones (bugs that only appear when several parameters are malformed in combination).
Another problem is that black-box testing usually means attacking a closed system, which makes it harder to evaluate how dangerous a found vulnerability really is.
Technical resources on OWASP
- Fuzzing vectors
- Wikipedia article
- Fuzzing-related papers
- The ultimate fuzzers list @ infosec
- Another list @ hacksafe
- Hachoir, a generic parser: could be quite a good starting point
- The fuzzing mailing list
- Codenomicon's product suite