Another use of copy-on-write is in the calloc function. This can be implemented by keeping a single page of physical memory filled with zeroes. When the memory is allocated, all the virtual pages returned refer to that zero-filled page and are marked copy-on-write. This way, the amount of physical memory allocated to the process does not increase until data is actually written. This is typically done only for larger allocations.
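As a minimal sketch of this idea, the hypothetical allocator below satisfies large zeroed requests with an anonymous mmap(). On typical systems the kernel backs such a mapping with its shared zero page until the first write, so the allocation costs almost no physical memory up front. The names big_calloc and big_free are illustrative, not part of any real allocator.

```c
/* Sketch: a calloc-style allocator for large requests that relies on the
 * kernel's zero page and copy-on-write instead of explicitly zeroing memory.
 * Names are hypothetical; real allocators are considerably more involved. */
#include <stddef.h>
#include <sys/mman.h>

void *big_calloc(size_t nmemb, size_t size)
{
    /* Guard against overflow in nmemb * size. */
    if (nmemb == 0 || size == 0 || size > (size_t)-1 / nmemb)
        return NULL;
    size_t bytes = nmemb * size;

    /* Anonymous pages are already zero-filled: each one initially maps the
     * shared zero page, copy-on-write, so no physical memory is committed
     * until the caller writes to it. */
    void *p = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return p == MAP_FAILED ? NULL : p;
}

void big_free(void *p, size_t bytes)
{
    if (p)
        munmap(p, bytes);
}
```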
Copy-on-write can be implemented by telling the MMU that certain pages in the process's address space are read-only. When data is written to such a page, the MMU raises an exception. The kernel handles it by allocating a new physical page, copying the contents of the original page into it, and remapping the written page to that new physical location before resuming the process.
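The same mechanism can be emulated in user space, which makes the sequence of events easy to see. The sketch below marks a page read-only with mprotect(), catches the resulting SIGSEGV, "allocates new memory" with a fresh mapping, copies the old contents across, and resumes the faulting write. This is only an illustration under the assumption of a Linux-like system; a real kernel does the equivalent work inside its page-fault handler using the page tables, and calling mmap() and memcpy() from a signal handler is not guaranteed to be async-signal-safe by POSIX.

```c
/* User-space emulation of the copy-on-write fault path.  Illustrative only. */
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static char *shared_page;   /* the read-only "original" page */
static long  page_size;

static void cow_fault_handler(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    char *fault_addr = (char *)info->si_addr;
    if (fault_addr < shared_page || fault_addr >= shared_page + page_size)
        _exit(1);   /* not our emulated page: give up */

    /* "Allocate new physical memory": here, a fresh private mapping,
     * and preserve the original contents. */
    char *copy = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (copy == MAP_FAILED)
        _exit(1);
    memcpy(copy, shared_page, page_size);

    /* Replace the read-only page with a writable one at the same address,
     * then restore the saved contents. */
    mmap(shared_page, page_size, PROT_READ | PROT_WRITE,
         MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    memcpy(shared_page, copy, page_size);
    munmap(copy, page_size);
    /* Returning restarts the faulting write, which now succeeds. */
}

int main(void)
{
    page_size = sysconf(_SC_PAGESIZE);

    shared_page = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    strcpy(shared_page, "original contents");
    mprotect(shared_page, page_size, PROT_READ);   /* mark page read-only */

    struct sigaction sa = {0};
    sa.sa_sigaction = cow_fault_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    shared_page[0] = 'O';          /* faults; handler performs the copy */
    printf("%s\n", shared_page);   /* prints "Original contents" */
    return 0;
}
```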
One major advantage of copy-on-write is the ability to use physical memory sparsely. Because physical memory is consumed only as data is stored in it, very efficient hash tables can be implemented that use little more physical memory than is needed for the objects they contain. However, such programs run the risk of exhausting virtual address space: virtual pages reserved by the hash table but never touched still cannot be used by other parts of the program.
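The sparse-usage pattern can be demonstrated with a large, lazily committed reservation, as in the sketch below. It reserves a 2 GiB virtual array and writes to only two slots; the kernel assigns physical pages (copied from the zero page on first write) only for the pages actually touched, so resident memory tracks the number of stored entries rather than the table's capacity. MAP_NORESERVE is a Linux-specific flag that merely disables swap-space accounting for the reservation; the sizes and indices are arbitrary.

```c
/* Sketch: a sparsely used table backed by a large virtual reservation. */
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

#define SLOTS (1UL << 28)   /* 2^28 slots of 8 bytes each: 2 GiB of virtual space */

int main(void)
{
    uint64_t *table = mmap(NULL, SLOTS * sizeof *table,
                           PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (table == MAP_FAILED)
        return 1;

    /* Only the two pages containing these slots become resident. */
    table[12345]    = 42;
    table[98765432] = 43;

    printf("%llu %llu\n",
           (unsigned long long)table[12345],
           (unsigned long long)table[98765432]);

    munmap(table, SLOTS * sizeof *table);
    return 0;
}
```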
The main drawback of copy-on-write is that it makes the operating system's kernel more complicated: whenever the kernel itself writes to a process's pages, it must first copy any that are marked copy-on-write. However, these concerns are similar to ones raised by more basic virtual memory features, such as swapping pages to disk.