It is simple to prove that this condition on L is equivalent to continuity, for the topologies induced by the norms.
In the case of a matrix A acting as a linear transformation, from R^m to R^n, or from C^m to C^n, one can prove directly that A must be bounded. In fact the function taking v to ||A(v)|| is continuous, and the unit sphere is compact, so the supremum over the unit vectors is attained, and in particular finite.
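This can be checked numerically. The sketch below (assuming NumPy is available; the shape of A and the number of sample vectors are arbitrary choices) samples unit vectors and verifies that no ||A(v)|| exceeds the operator norm, which for Euclidean norms on both sides is the largest singular value of A.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))  # a linear map from R^4 to R^3

# For Euclidean norms on domain and range, the operator norm is the
# largest singular value; np.linalg.norm with ord=2 computes it.
op_norm = np.linalg.norm(A, 2)

# Sample random unit vectors: every ||A v|| stays below the operator
# norm, and the supremum over the (compact) unit sphere attains it.
samples = rng.standard_normal((1000, 4))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)
values = np.linalg.norm(samples @ A.T, axis=1)

assert values.max() <= op_norm + 1e-12
```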
In general the operator norm of a bounded linear transformation L from V to W, where V and W are both normed real vector spaces (or both normed complex vector spaces), is defined as the supremum of ||L(v)|| taken over all v in V of norm 1. This definition uses the property ||cv|| = |c| ||v||, where c is a scalar, to restrict attention to v with ||v|| = 1. Geometrically, this means that (for real scalars) we need look at only one vector on each ray out from the origin 0.
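In symbols, writing ||.||_V and ||.||_W for the two norms, the definition can be recorded as:

```latex
\|L\|_{\mathrm{op}}
  \;=\; \sup_{\|v\|_V = 1} \|L(v)\|_W
  \;=\; \sup_{v \neq 0} \frac{\|L(v)\|_W}{\|v\|_V} .
```

The equality of the two suprema is exactly the homogeneity property ||cv|| = |c| ||v|| mentioned above, applied with c = 1/||v||.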
Note that there are two different norms here: that in V and that in W. Even if V = W we might wish to take distinct norms on it. In fact, given two norms ||.|| and |||.||| on V, the identity operator on V will have an operator norm, in passing from V with ||.|| as norm to V with |||.|||, only if we can say |||v||| ≤ C ||v|| for some constant C and all v in V; the operator norm of the identity is then the smallest such C.
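A small concrete case (a sketch; the dimension n = 4 is an arbitrary choice) is the identity on R^n between the sum norm ||.||_1 and the maximum norm ||.||_inf, where the two operator norms are 1 and n:

```python
def sum_norm(v):      # the l1 norm ||v||_1
    return sum(abs(x) for x in v)

def max_norm(v):      # the l-infinity norm ||v||_inf
    return max(abs(x) for x in v)

# For every v in R^n:  max_norm(v) <= sum_norm(v) <= n * max_norm(v).
# So the identity is bounded in both directions, with operator norms
# 1 and n; the extreme ratios are attained by the vectors below.
n = 4
v = [1.0] + [0.0] * (n - 1)   # one nonzero entry: ratio 1
w = [1.0] * n                 # all entries equal: ratio n
assert max_norm(v) == sum_norm(v) == 1.0
assert sum_norm(w) == n * max_norm(w)
```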
This is, however, a special feature of finite dimensions: there all norms turn out to be equivalent, in the sense that they stay within constant multiples of each other, so that from a topological point of view they give the same open sets. All of this fails for infinite-dimensional spaces. This can be seen, for example, by considering the differentiation operator D, as applied to trigonometric polynomials. We can take the root mean square as norm: since D(e^{inx}) = in e^{inx}, the operator D multiplies the norm of e^{inx} by n, so as n grows the ratio ||D(v)||/||v|| exceeds any bound. Therefore an operator as fundamental as D can fail to have an operator norm.
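The computation behind this can be sketched as follows (a minimal model, representing a trigonometric polynomial by its Fourier coefficients; the function names are invented for illustration):

```python
import math

def rms_norm(coeffs):
    # Root-mean-square norm of sum_n c_n e^{inx}: by orthonormality of
    # the e^{inx} in the mean-square inner product, this is
    # sqrt(sum |c_n|^2).  coeffs maps the frequency n to c_n.
    return math.sqrt(sum(abs(c) ** 2 for c in coeffs.values()))

def D(coeffs):
    # Differentiation sends e^{inx} to i n e^{inx}.
    return {n: 1j * n * c for n, c in coeffs.items()}

# On the line spanned by e^{inx}, D scales the norm by n; so no single
# constant bounds ||D(v)|| / ||v|| over all trigonometric polynomials.
for n in (1, 10, 100):
    v = {n: 1.0}
    assert math.isclose(rms_norm(D(v)) / rms_norm(v), n)
```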
A basic theorem, proved using the Baire category theorem (in the form of the closed graph theorem), shows that a closed linear operator whose domain is all of one Banach space, with values in another, must be bounded. That is, in the example just given, D must not be defined on all square-integrable Fourier series; and indeed we know that they can represent continuous but nowhere differentiable functions. The intuition is that if L magnifies norms of some vectors by as large a number as we choose, we should be able to condense singularities (choose a vector v that sums up others for which it would be contradictory for ||L(v)|| to be finite), showing that the domain of L cannot be the whole of V.
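The precise result behind this intuition may be stated as follows:

```latex
\textbf{Closed graph theorem.} Let $V$ and $W$ be Banach spaces and let
$L : V \to W$ be a linear operator. If the graph
$\{(v, L(v)) : v \in V\}$ is closed in $V \times W$, then $L$ is bounded.
```

Since differentiation is a closed operator on the square-integrable functions, the theorem shows its domain cannot be the whole Hilbert space.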