You probably don’t think about the clock in your computer too much, unless perhaps you have an important deadline coming up, but knowing a little bit about how your computer keeps time can keep your system and your network running smoothly. Plus, computer technology has changed the way we track, log and record time, which is pretty interesting in itself. Here we'll take a look at how computers keep time.
Unix Time
Forgive me for being a little Unix-centric, but a good chunk of the servers on the internet use Unix time. What is Unix time? It’s actually pretty simple. It’s the number of seconds that have passed since midnight on January 1, 1970, UTC. (I’ll explain UTC a little later in this article.) This is known as "the epoch."
Many Unix and Linux systems keep time by counting epoch seconds and converting them into local time as needed. The advantage of this is that it’s easy to calculate the difference between two dates and times. If I want to find out how much time has passed between midnight on January 1, 1970, and right now, it’s just a matter of simple subtraction. The Perl programming language can calculate epoch seconds for any time you can imagine. (You can learn more about Perl in Perl 101.)
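To make that concrete, here’s a minimal sketch in Perl. The specific date is just an example, and Time::Local is a core module that turns a calendar date back into epoch seconds:

    use strict;
    use warnings;
    use Time::Local qw(timegm);

    # Epoch seconds right now.
    my $now = time;
    print "Seconds since the epoch: $now\n";

    # Epoch seconds for a specific moment: midnight UTC on January 1, 2018.
    # timegm takes (sec, min, hour, day-of-month, month 0-11, year).
    my $new_year = timegm(0, 0, 0, 1, 0, 2018);

    # Because both values are plain counts of seconds, finding the elapsed
    # time is simple subtraction.
    my $elapsed = $now - $new_year;
    printf "Seconds since 2018-01-01: %d (about %.1f days)\n",
        $elapsed, $elapsed / 86400;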
There are also a number of fun patterns that come up as the seconds since 1970 tick on. Wikipedia has a list of them. For example, on February 13, 2009, the number of seconds passed reached 1,234,567,890. Yes, that’s all the numbers starting from one. There were parties in technical communities around the world to celebrate. And for those of you who aren't quite that geeky, no, I am not making this up.
A more serious consequence of keeping time this way is the 2038 problem. Without going into too much detail, on January 19, 2038, the number of seconds passed will become too big to fit in a 32-bit signed integer. Numerous systems, including embedded computers, still use 32-bit processors and a 32-bit representation of time. We still have plenty of time to convert to 64-bit systems or find other workarounds, but if you remember the Y2K debacle, where people scrambled to fix that problem at the last moment, you know there isn't always the will to deal with these things upfront.
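If you want to see that moment for yourself, a few lines of Perl will show it. This assumes a 64-bit Perl, which can represent times past the cutoff; a 32-bit build may wrap around instead:

    use strict;
    use warnings;

    # The largest value a signed 32-bit counter can hold is 2**31 - 1.
    my $max_32bit = 2**31 - 1;   # 2,147,483,647

    # On a 64-bit Perl this prints the moment the old counter runs out:
    # Tue Jan 19 03:14:07 2038 (UTC). One second later, a 32-bit time
    # value would wrap around to December 1901.
    print scalar gmtime($max_32bit), "\n";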
When we do switch over completely to 64-bit systems, we’ll have until around the year 292,277,026,596 before the counter overflows again. At that point, however, humanity is likely to have more pressing concerns than its computer clocks – the Sun will have long since swallowed up the Earth by then.
UTC
Although UTC, or Coordinated Universal Time, isn’t strictly limited to computers, it’s important in the way their clocks run. It’s a replacement for Greenwich Mean Time that accounts for the gradual slowing of the Earth’s rotation by occasionally adding leap seconds. The prime meridian upon which this calculation is based is still located at the Greenwich Observatory in England. Why there? It’s a holdover of the British Empire.
Time zones are represented as offsets from UTC. For example, I live in the Pacific time zone, which is UTC-8. During daylight saving time, it’s actually UTC-7.
UTC is used in a variety of contexts to get rid of ambiguities about time zones, including in aviation, weather forecasting and computing. Most machines represent the local time zone as an offset from UTC, but most servers on the internet express time in UTC. You can check your email headers for proof.
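You can see both views of the same instant with Perl’s built-in functions: gmtime renders epoch seconds as UTC, while localtime applies your machine’s configured offset (the %z format below reports that offset on most systems):

    use strict;
    use warnings;
    use POSIX qw(strftime);

    my $now = time;   # one count of epoch seconds...

    # ...rendered two different ways: once in UTC, once in the local zone.
    print "UTC:    ", scalar gmtime($now),    "\n";
    print "Local:  ", scalar localtime($now), "\n";

    # strftime's %z gives the local offset from UTC, e.g. -0800.
    print "Offset: ", strftime("%z", localtime($now)), "\n";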
NTP
While servers use clocks set to UTC, computer clocks have a notorious habit of drifting. Having a bunch of out-of-sync clocks can wreak havoc with things like email, which depends on timing. That’s why Network Time Protocol came about. NTP has been around since the '80s, keeping computer clocks closely synchronized with reference servers. You usually don’t have to think about it. Most of the time, all you have to do is enable NTP on your system, either through configuration files or a control panel, and NTP will take care of the rest by contacting servers and synchronizing the computer clock periodically. (Learn more in How Network Time Protocol Keeps the Internet Ticking.)
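If you’re curious what happens under the hood, here’s a rough sketch in Perl of the simplest form of the exchange, SNTP, the stripped-down cousin of full NTP. The pool.ntp.org hostname is just a convenient public example, and a real NTP client does far more filtering and gradual adjustment than this:

    use strict;
    use warnings;
    use IO::Socket::INET;

    # Open a UDP socket to an NTP server (port 123).
    my $sock = IO::Socket::INET->new(
        PeerAddr => 'pool.ntp.org',
        PeerPort => 123,
        Proto    => 'udp',
    ) or die "Cannot create socket: $!";

    # First byte 0x1b = leap indicator 0, version 3, mode 3 (client).
    my $request = "\x1b" . ("\0" x 47);
    $sock->send($request);

    my $response;
    $sock->recv($response, 48);

    # The server's transmit timestamp (whole seconds) is in bytes 40-43,
    # counted from 1900, so subtract the offset to the 1970 Unix epoch.
    my $ntp_seconds  = unpack('N', substr($response, 40, 4));
    my $unix_seconds = $ntp_seconds - 2208988800;

    print "Server clock (UTC): ", scalar gmtime($unix_seconds), "\n";
    print "Local clock  (UTC): ", scalar gmtime(time), "\n";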
Fractional Time
An interesting way to represent time is by using fractional days. It’s a form of decimal time that represents the time as a percentage of the day that has passed. For example, midnight is 0.00, noon is 0.50, 6:00 p.m. is 0.75 and so on.
To get the current time as a fractional day, divide the current minute by 60, add the result to the hour, and then divide by 24. For example, if it’s currently 1:24 p.m., then 24 divided by 60 is .40, giving 13.40. Dividing that by 24 yields .56. You can also use any precision you want. For example, I could have written the time as .5583333. The advantage to keeping time this way is that, like the epoch seconds mentioned above, computing the difference between two times is just a matter of simple subtraction.
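Here’s that calculation as a small Perl helper (the function name is just for illustration):

    use strict;
    use warnings;

    # Convert hours and minutes on the 24-hour clock into a fraction of a day.
    sub fractional_day {
        my ($hour, $minute) = @_;
        return ($hour + $minute / 60) / 24;
    }

    # 1:24 p.m. is 13:24 on the 24-hour clock.
    printf "%.7f\n", fractional_day(13, 24);   # 0.5583333

    # Differences between two times are again just subtraction.
    my $diff = fractional_day(18, 0) - fractional_day(12, 0);
    printf "From noon to 6 p.m. is %.2f of a day\n", $diff;   # 0.25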
ISO 8601
If you’ve ever been abroad, you’ll know that there are a lot of different ways to represent dates. In the United States, the month usually comes first, so January 15, 2018 would be represented as 1/15/18. In other places, the day comes first, as in 15/1/18. This can cause some problems when communicating with people in different countries.
An international standard, ISO 8601, tries to solve some of these problems. It’s pretty simple: YYYY-MM-DD. Coming back to our example, represented according to ISO 8601, it would look like this: 2018-01-15. It’s unambiguous, and "big-endian" because the year comes first. This standard also makes it easy for computers to sort things by date, since sorting the text also sorts chronologically. Other variations add the time of day in UTC (marked with a trailing Z) or use the ordinal day of the year instead of the month and day.
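In Perl, the standard POSIX module’s strftime makes these formats easy to produce:

    use strict;
    use warnings;
    use POSIX qw(strftime);

    # Today's date, big-endian: year, then month, then day.
    print strftime("%Y-%m-%d", localtime), "\n";        # e.g. 2018-01-15

    # A common variation: a full timestamp in UTC, marked with a trailing Z.
    print strftime("%Y-%m-%dT%H:%M:%SZ", gmtime), "\n";

    # Because the fields run from largest to smallest, plain string sorting
    # puts ISO 8601 dates in chronological order.
    my @dates = ('2018-01-15', '2017-12-31', '2018-02-01');
    print join(", ", sort @dates), "\n";   # 2017-12-31, 2018-01-15, 2018-02-01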
All In Good Time
Timing is important, and even more so for computers. Hopefully, this article gives you a sense of how computers keep track of time behind the scenes.