A clock consists of an oscillator, which ticks at some nominal frequency, and a counter, which counts the ticks.
Traditional clocks have a counter which directly represents civil time (12 hours per half day or 24 hours per day, 60 minutes per hour, etc.) but in computing it's more usual to have a binary counter with a separate mechanism to translate its value into civil time.
The visible part of the counter does not necessarily count ticks directly: its low-order digits may be invisible. For example, wrist watches often have a quartz oscillator that ticks at 32,768 Hz but a fastest hand that moves once per second. On the other hand, in computing the counter may have a higher resolution than the tick granularity. For example, on Unix gettimeofday() returns the value of a counter with a resolution of 1 μs but a granularity of typically 100 Hz or 1000 Hz. A counter's interval is the amount it is incremented by for each tick, so 10,000 μs or 1,000 μs respectively for the gettimeofday() example.
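The distinction between resolution and granularity can be observed directly. The sketch below (my own illustration, not any standard API) compares a clock's reported resolution with the smallest increment it is actually seen to make; on a tickless kernel the two may coincide, on a tick-based kernel the observed granularity can be much coarser.

```python
import time

def observed_granularity(clock=time.CLOCK_REALTIME, samples=10_000):
    """Smallest nonzero step seen between successive reads of the clock.

    The counter's resolution is what clock_getres() reports; its
    granularity is how much it actually advances per tick, which this
    function estimates empirically.
    """
    smallest = None
    prev = time.clock_gettime(clock)
    for _ in range(samples):
        now = time.clock_gettime(clock)
        delta = now - prev
        if delta > 0 and (smallest is None or delta < smallest):
            smallest = delta
        prev = now
    return smallest

# The advertised resolution of the same counter, for comparison.
resolution = time.clock_getres(time.CLOCK_REALTIME)
```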
A real-time clock or RTC can be used to keep civil time. As well as an oscillator and a counter, it has an epoch, which is the point in civil time at which the counter's value was zero, and a mapping between other values of the counter and civil time. This mapping is generally complicated and dependent on what kind of civil time you want.
In my terminology, a clock that is not a real-time clock is a timer.
An interval timer counts time like a real-time clock but does not have a mapping to civil time.
A countdown timer has a negative interval and typically triggers some action when its counter reaches zero.
An activity timer does not tick continuously but only when some activity is being performed, for example when a process is running on the CPU. It may even tick faster than its nominal frequency, for example if a process has threads running on multiple CPUs.
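The difference between an interval timer and an activity timer shows up as soon as a process blocks. A minimal sketch, assuming a Unix-like system where Python exposes the per-process CPU clock: across a sleep, the monotonic interval timer keeps ticking while the activity timer barely advances.

```python
import time

def cpu_vs_wall(sleep_s=0.05):
    """Compare an interval timer (monotonic clock) with an activity
    timer (the per-process CPU-time clock) across a sleep.

    While the process is blocked the activity timer does not tick;
    with threads busy on several CPUs it could instead tick faster
    than real time.
    """
    w0 = time.monotonic()
    c0 = time.clock_gettime(time.CLOCK_PROCESS_CPUTIME_ID)
    time.sleep(sleep_s)
    w1 = time.monotonic()
    c1 = time.clock_gettime(time.CLOCK_PROCESS_CPUTIME_ID)
    return w1 - w0, c1 - c0
```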
The terminology I use when correcting a clock varies depending on which of the clock's parameters I am changing.
You can reset the counter to step the clock to the correct time. Unexpected resets can confuse software that makes unwarranted assumptions about a clock.
You can adjust the rate of a clock. This usually means that the counter interval is changed, not the frequency of the hardware oscillator! This might be done to slew the clock to the correct time as well as to correct a systematic error in the clock's nominal frequency.
You can change the epoch of a clock. In practice most clocks have fixed epochs and are corrected by resetting their counters.
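The three kinds of correction can be modelled in a few lines. This is a toy model of my own, not a real kernel interface; all the names are hypothetical. It derives a corrected clock from an underlying oscillator and supports stepping (resetting the counter), slewing (changing the counter interval, i.e. the rate, not the oscillator), and changing the epoch.

```python
import time

class SoftClock:
    """Toy model of a corrected clock: time = epoch + counter, where
    the counter advances at `rate` times the underlying oscillator."""

    def __init__(self, epoch=0.0, rate=1.0, raw=time.monotonic):
        self.raw = raw            # the underlying "oscillator"
        self.epoch = epoch
        self.rate = rate
        self._raw0 = raw()
        self._counter0 = 0.0

    def counter(self):
        return self._counter0 + (self.raw() - self._raw0) * self.rate

    def now(self):
        return self.epoch + self.counter()

    def reset(self, value):
        """Step the clock: reset the counter to the correct value."""
        self._raw0 = self.raw()
        self._counter0 = value

    def adjust(self, rate):
        """Slew the clock: change the counter interval (the rate),
        not the frequency of the underlying oscillator."""
        self._counter0 = self.counter()
        self._raw0 = self.raw()
        self.rate = rate

    def rebase(self, epoch):
        """Change the epoch: the only gross correction available to
        a monotonic clock, whose counter cannot be reset."""
        self.epoch = epoch
```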
Clocks may be corrected automatically. A good way of doing this on a networked machine is with NTP which avoids resetting the clock if it can, but it has been common for crappier systems to use less clever protocols.
A clock that is stabilized is automatically adjusted to conform to a frequency standard. A clock that is synchronized is automatically corrected to conform to civil time. It's possible to have one without the other in either direction.
A monotonic clock cannot be reset, so you can only make gross corrections by changing its epoch. It's common for computer systems to provide monotonic clocks without any mapping to civil time, in which case they are there for use as interval timers.
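Using a monotonic clock as an interval timer looks like this in practice. A small sketch of my own: because the monotonic clock cannot be reset, the measured interval is immune to steps of the civil-time clock happening mid-measurement.

```python
import time

def elapsed(fn, *args):
    """Time a call against the monotonic clock. Using the civil-time
    clock here would give a wrong (even negative) interval if the
    clock were stepped during the call."""
    start = time.monotonic()
    result = fn(*args)
    return result, time.monotonic() - start
```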
A clock that keeps atomic time counts SI seconds and has a trivial mapping to TAI. (Monotonic clocks keep atomic time.) The system must have an up-to-date table of leap seconds in order to map atomic-time clocks to civil time.
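Mapping an atomic-time counter to civil time is just a table lookup, provided the table is current. A sketch under that assumption: the two entries below are real historical TAI−UTC offsets (36 s from 2015-07-01, 37 s from 2017-01-01), but a real system must keep the whole table up to date.

```python
# Leap second table: (UTC seconds since the POSIX epoch at which the
# offset took effect, TAI-UTC offset in seconds). Illustrative excerpt.
LEAP_TABLE = [
    (1435708800, 36),  # 2015-07-01
    (1483228800, 37),  # 2017-01-01
]

def tai_to_utc(tai_seconds):
    """Map a TAI-style second count to UTC seconds by subtracting the
    TAI-UTC offset in force at that moment."""
    offset = LEAP_TABLE[0][1]
    for boundary, off in LEAP_TABLE:
        if tai_seconds - off >= boundary:
            offset = off
    return tai_seconds - offset
```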
Clocks that don't keep atomic time keep some kind of universal time. The POSIX and NTP clocks are examples. They have to have some kind of fudge to deal with UTC leap seconds. They may reset the clock or vary its rate on or near a leap second.
Some very limited computer systems do not have any sophistication in the mapping between their clock and civil time, so the clock just represents local time fairly directly. This implies that the clock is reset when Summer Time transitions occur.
It is common for clocks to be made more complicated so that they include some of the semantics of civil time. They may represent the counter with a non-uniform base similar to the one we use for writing times in natural language, or include mechanisms for handling leap seconds. (A linear counter has a simple integer or fixed-point representation.)
For example, the clock accessed via ntp_gettime() has an additional field that can warn about an impending leap second or that one is currently in progress.
For a hypothetical example, counters like the one returned by gettimeofday() have separate parts that count seconds and fractions of seconds. Markus Kuhn suggests that you could represent leap seconds as double-length seconds where the sub-second part of the counter increments past the point where it would normally carry into the seconds part.
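A sketch of what normalizing such a counter might look like (hypothetical code, not a real API): during a leap second the sub-second part is allowed to run on to twice its usual limit instead of carrying into the seconds part, so the leap second appears as one double-length second.

```python
def normalize(seconds, micros, leap_in_progress=False):
    """Normalize a (seconds, microseconds) counter in the style of
    Markus Kuhn's suggestion: during a leap second the microseconds
    field may legitimately run up to 1,999,999 before carrying."""
    limit = 2_000_000 if leap_in_progress else 1_000_000
    seconds += micros // limit
    micros %= limit
    return seconds, micros
```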
These complications are motivated by keeping the mapping between counter values and civil time simple. Since time is promulgated in the form of UTC rather than TAI, and common external time representations (POSIX, NTP, etc.) are based on UT, this makes some sense. However, it means that the clock is not suitable for use as the basis of an interval or countdown timer.
Ideally the kernel could just provide simple time counters and leave all civil time handling to userland. Unfortunately that isn't quite possible, because of API compatibility constraints (POSIX time) and because file systems have to store time stamps as representations of civil time. However, the kernel does not have to deal with any time other than the present, or with anything more than simple interval computations. So it could just represent other clocks as epoch adjustments to a master monotonic clock.
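The epoch-adjustment design above can be sketched in a few lines (my own illustration; the names are hypothetical). There is one master monotonic counter, and every other clock is just an offset from it; correcting a derived clock means changing its offset, never touching the master counter.

```python
import time

class DerivedClock:
    """A clock represented purely as an epoch adjustment to the
    master monotonic clock."""

    def __init__(self, offset):
        self.offset = offset      # epoch adjustment, in seconds

    def now(self):
        return time.monotonic() + self.offset

    def step(self, correct_time):
        """Correct the derived clock by changing only its offset."""
        self.offset = correct_time - time.monotonic()

# A "realtime" clock derived from the master monotonic counter,
# initialized from the system's current civil time.
realtime = DerivedClock(offset=time.time() - time.monotonic())
```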