The word "proved" above means, in effect: proved by a method whose validity can be checked algorithmically, for example, by a computer (although no such machines existed in 1929).
A logical formula is called universally valid if it is true in every possible domain and under every possible interpretation, within that domain, of the non-logical symbols occurring in the formula. To say that the formula can be proved means that there exists a formal proof of it which uses only the logical axioms and rules of inference adopted in some particular formalisation of first-order predicate calculus.
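In standard notation (a brief sketch, assuming a fixed first-order language and a fixed proof calculus), universal validity is written with the semantic turnstile and provability with the syntactic turnstile, and the completeness theorem is the implication from the first to the second:

    \models \varphi \quad (\varphi \text{ is universally valid}) \qquad \vdash \varphi \quad (\varphi \text{ is formally provable})

    \text{Completeness:}\quad \models \varphi \;\Longrightarrow\; \vdash \varphi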
The theorem can be seen as a justification of the logical axioms and inference rules of first-order logic. The rules are "complete" in the sense that they are strong enough to prove every universally valid statement. The converse, that only universally valid statements can be proven in first-order logic, was already known; this property is the soundness of the proof system.
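Combined with that soundness property, the completeness theorem gives an exact correspondence between provability and universal validity (same notation as in the sketch above):

    \vdash \varphi \;\Longleftrightarrow\; \models \varphi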
To state Gödel's completeness theorem precisely, one has to refer to an underlying set theory in order to clarify what the word "domain" in the definition of "universally valid" means.
The branch of mathematical logic that deals with what is true in different domains and under different interpretations is model theory; the branch that deals with what can be formally proved is proof theory. The completeness theorem, therefore, establishes a fundamental connection between what can be proved and what is (universally) true; between model theory and proof theory; between semantics and syntax in mathematical logic. It should not, however, be misinterpreted as obliterating the difference between these two concepts; in fact, another celebrated result by the same author, Gödel's incompleteness theorem, shows that there are inherent limitations in what can be achieved with formal proofs in mathematics.
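Loosely stated, and only as a sketch of how the two results fit together: completeness concerns pure first-order logic, whereas incompleteness concerns what a particular theory T can decide, where T is any consistent, effectively axiomatized theory containing enough arithmetic:

    \text{Completeness:}\quad \models \varphi \;\Longleftrightarrow\; \vdash \varphi
    \qquad
    \text{Incompleteness:}\quad \text{there is a sentence } \varphi \text{ with } T \nvdash \varphi \text{ and } T \nvdash \neg\varphi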
For an explanation of Gödel's original proof of the theorem, see Original proof of Gödel's completeness theorem.