To use computers well, learn their rules

By dkl9, written 2023-186, revised 2023-186 (0 revisions)

To use computers effectively, you don't need to learn a lot about particular programs and features. Learn the near-universal rules of computing and common interface designs. The details are readily inferred.

The rules also apply to things beyond what you might think of as "computers". Smartphones are computers.

Here are some of those rules to demonstrate the technique, and to help you learn it.

§ How computers use numbers

Computers have limited memory, so they represent a number with (usually) a tiny amount of memory. Often, that amount is 16, 32, or 64 bits. In case that's meaningless to you: that means each number can only be one of a limited set of values. 66 thousand, 4.3 billion, 18 quintillion options, respectively, for those sizes.
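Those counts are just powers of two, and are easy to check for yourself; a quick sketch in Python, whose integers aren't limited to a fixed bit-width:

```python
# Each n-bit size allows exactly 2**n distinct values.
for bits in (16, 32, 64):
    print(bits, "bits:", 2**bits, "possible values")
```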

If what you do introduces huge or precise numbers, at some point along the path of size or precision, the program doesn't handle the numbers quite as well, or, in some cases, breaks.

If you try to make the number too big, it's handled with less precision than you expect. For a clear example of this: try putting 10^20 + 999 in a calculator. Some calculators are forced to round it to 10^20, dropping the 999.
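You can reproduce that rounding with ordinary 64-bit floating-point numbers, the format most calculators and programming languages use. A minimal check in Python:

```python
# A 64-bit float near 1e20 has a rounding step of about 16 thousand,
# so adding 999 is too small a change to register.
big = 1e20
print(big + 999 == big)  # True: the 999 was dropped
```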

If you try to make the number too precise, the little details will be cut off and won't help.
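The classic demonstration of precision being cut off, again with 64-bit floats:

```python
# 0.1 and 0.2 can't be represented exactly in binary, so the tiny
# leftover errors accumulate and the "obvious" comparison fails.
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004
```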

§ Quadratic scaling

Some things in computing, such as images or link-networks, have an inherent scale proportional to the square of the obvious variable. An image has around as many pixels as the square of its width. A link-network has around half as many potential links as the square of the number of items it can link. The square of a number grows much faster than the original number. Doubling the original quadruples the square.
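The two examples above, as a short Python sketch:

```python
def pixel_count(width, height):
    # An image's pixel count is width times height; for a roughly
    # square image, that's about the square of the width.
    return width * height

def potential_links(n):
    # Each unordered pair of n items is a potential link: n*(n-1)/2,
    # which is about half of n squared.
    return n * (n - 1) // 2

print(pixel_count(1000, 1000))  # 1000000
print(pixel_count(2000, 2000))  # 4000000: doubling the side quadruples the pixels
print(potential_links(1000))    # 499500: about half of 1000 squared
```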

The more detailed study of matters like these is computational complexity.

§ The size of data

Typical units are based around the byte (representing one character, at least in English plaintext), and extend it to the kilobyte/KB, megabyte/MB, gigabyte/GB, and terabyte/TB. Those last terms, respectively, mean 1 000, 1 000 000, 1 000 000 000, and 1 000 000 000 000 bytes, each one a thousand times the last. Sometimes (for good reason) the factor each time is instead the nearby 1 024: 1 024, 1 048 576, 1 073 741 824, 1 099 511 627 776.
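The gap between the two conventions is easy to see in code. A small Python sketch (the 1 024-based units are often written KiB, MiB, GiB, TiB when someone wants to disambiguate):

```python
size_bytes = 3_500_000_000  # e.g. a large video file

print(size_bytes / 10**9)  # 3.5 decimal gigabytes
print(size_bytes / 2**30)  # about 3.26 binary gigabytes (gibibytes)
```

The roughly 7% discrepancy at the gigabyte scale is why a drive sold as "500 GB" shows up smaller in operating systems that count by 1 024s.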

The size of a file (or file-equivalent dataset) gives a hint as to how long it'll take to process. On machines of the 2010s, straightforward operations (transferring, searching, reformatting, etc) tend to proceed somewhere between tens of megabytes and a few gigabytes per second, so a megabyte finishes near-instantly, a gigabyte takes seconds, and a terabyte can take minutes to hours.

Of course, the time an operation takes varies depending on what exactly you're doing. Some operations are simpler and run faster than typical; many tasks are much more complex and can take far longer. A few don't really depend on the size of the data at all.
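You can check the size-time relationship yourself with a minimal Python sketch (absolute timings depend entirely on your machine; the point is the roughly linear growth):

```python
import time

def time_to_scan(n_bytes):
    """Time a straightforward linear pass over n_bytes of data."""
    data = bytes(n_bytes)  # a block of n_bytes zero bytes
    start = time.perf_counter()
    data.count(0)          # a simple search that touches every byte
    return time.perf_counter() - start

# Ten times the data should take roughly ten times as long.
print(time_to_scan(10_000_000))
print(time_to_scan(100_000_000))
```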

§ Why tasks get stuck

If something you run on a computer doesn't finish after a long wait, and doesn't show an error:

  1. Consider the size of its task (see previous section). If practical, try doing the same thing with a much smaller file, to see if it's a matter of data size or something else. Evaluate if the time taken really exceeds what you should expect.
  2. Consider if there's an unavailable resource it might be waiting for. Network connections are the first that come to mind.
  3. It is distressingly common for programs to get stuck when they enter an accidental infinite loop.
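To illustrate point 3, here is a minimal sketch in Python of how an accidental infinite loop arises (a hypothetical search function, not from any particular program):

```python
def find_index(items, target):
    """Return the index of target in items, or -1 if absent."""
    i = 0
    while i < len(items):
        if items[i] == target:
            return i
        i += 1  # forget this line, and the loop spins forever unless the target is at index 0
    return -1

print(find_index(["a", "b", "c"], "c"))  # 2
print(find_index(["a", "b", "c"], "z"))  # -1
```

The broken version shows no error: the program keeps running, doing the same useless work, which from the outside looks exactly like being "stuck".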

§ Autocompletion

Many systems that expect you to enter somewhat-predictable text (writing with real words, filenames, usernames, etc) have a way to help you with that. Sometimes it displays automatically and you just have to select it, often with arrow-keys and Enter. Sometimes you have to trigger it, often with Tab.
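At its core, completion is just prefix matching against a list of candidates. A minimal sketch in Python (the candidate filenames here are made up for illustration):

```python
def complete(prefix, candidates):
    """Return the candidates that start with the typed prefix."""
    return [c for c in candidates if c.startswith(prefix)]

files = ["documents", "downloads", "desktop", "notes.txt"]
print(complete("do", files))  # ['documents', 'downloads']
```

Real completion systems add refinements (ranking, fuzzy matching, completing the longest shared prefix on Tab), but the prefix filter is the common core.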

§ Principles of filesystems

§ Configuration files

Many programs (especially the more complex ones) have configuration files associated with them. All of the following are often, but far from always, the case about configuration files:

§ Principles of the web