Most ways of counting and measuring things work logically. When the thing you're measuring increases, the number gets bigger. When you gain weight, the scale doesn't tell you a smaller number of kilograms or pounds. But things are not so sensible in astronomy, at least not when it comes to the brightnesses of stars.
Star magnitudes do count backward, the result of an ancient fluke that seemed like a good idea at the time. The history of the magnitude scale since then is, like so much else in astronomy, a story of increasing scientific precision built on an ungainly historical foundation too deeply rooted for anyone to bulldoze and start fresh.
The story begins around 129 B.C., when the Greek astronomer
Hipparchus produced the first well-known star catalog. Hipparchus ranked his stars in a simple way. He called the brightest ones "of the first magnitude," simply meaning "the biggest." Stars not so bright he called "of the second magnitude," second biggest. The faintest stars he could see he called "of the sixth magnitude." This system was copied by
Claudius Ptolemy in his own list of stars around A.D. 140. Sometimes Ptolemy added the words "greater" or "smaller" to distinguish between stars within a magnitude class. Ptolemy's works remained the basic astronomy texts for the next 1,400 years, so everyone used the system of first to sixth magnitudes. It worked just fine.
…Galileo forced the first change. On turning his newly made telescopes to the sky, Galileo discovered that stars existed that were fainter than Ptolemy's sixth magnitude. "Indeed, with the glass you will detect below stars of the sixth magnitude such a crowd of others that escape natural sight that it is hardly believable," he exulted in his 1610 tract,
Sidereus Nuncius. "The largest of these...we may designate as of the seventh magnitude...." Thus did a new term enter the astronomical language, and the magnitude scale became open-ended. Now there could be no turning back.
…The resulting magnitude scale is logarithmic, in neat agreement with the 1850s belief that all human senses are logarithmic in their response to stimuli. (The decibel scale for rating loudness was likewise made logarithmic.) Alas, it's not quite so, not for brightness, sound, or anything else. Our perceptions of the world follow
power-law curves, not logarithmic ones. Thus a star of magnitude 3.0 does not in fact look exactly halfway in brightness between 2.0 and 4.0. It looks a little fainter than that. The star that looks halfway between 2.0 and 4.0 will be about magnitude 2.8. The wider the magnitude gap, the greater this discrepancy.
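That discrepancy can be checked numerically. The sketch below is a minimal illustration, not from the article itself: it uses the standard Pogson definition of magnitude (5 magnitudes = a factor of 100 in flux) and *assumes* a Stevens power-law exponent of about 0.5 for perceived point-source brightness (published exponents vary, so the exact midpoint depends on that choice).

```python
import math

def flux(mag):
    # Physical flux relative to magnitude 0 (Pogson scale: 5 mag = factor 100).
    return 10 ** (-0.4 * mag)

def mag_from_flux(f):
    return -2.5 * math.log10(f)

# Assumed Stevens power-law exponent for perceived brightness of point
# sources; ~0.5 is a commonly quoted value, but it is an assumption here.
A = 0.5

def perceived(mag):
    return flux(mag) ** A

# Find the star whose *perceived* brightness is halfway between
# magnitude 2.0 and magnitude 4.0.
halfway_percept = (perceived(2.0) + perceived(4.0)) / 2
halfway_mag = mag_from_flux(halfway_percept ** (1 / A))

print(round(halfway_mag, 2))  # ≈ 2.78, i.e. "about magnitude 2.8"
```

With a logarithmic response the midpoint would land exactly at 3.0; the power-law assumption pushes it toward the brighter star, matching the roughly 2.8 figure above.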
Source: Stellar Magnitude