The term “byte” (which sounds like “bite”) was first used in July 1956 by Werner Buchholz, a PhD at IBM. He spelled it with a “y” to keep typos of “bite” from becoming “bit”. The short answer is that bits measure the amount of data sent on a network, and bytes measure data stored in the computer’s memory and drives.
The word “bit” is short for binary digit: a single one or zero. When a bit is 1, it is “on” and contributes its position’s value; when it is 0, it is only a placeholder and contributes nothing. The word first appeared in print in the Bell System Technical Journal in July 1948, where Claude Shannon credited J.W. Tukey with coining it. The Bell System, named for telephone inventor Alexander Graham Bell, was key in developing communications standards.
Let’s get back to the question: what do bits and bytes do? When any device on a network sends data, it sends bits. We usually measure those bits as a rate, how many go by each second, by saying things like “a hundred megabits,” which means 100 million bits per second.
That brings up another common question: what’s the difference between “kilo,” “mega,” and “giga,” and what comes next? The term “kilo” is a prefix that refers to 1,000, as in kilogram. “Mega” refers to millions and is usually abbreviated with a capital M (100 Mbps = 100 Megabits per second).
“Giga” is the prefix we use for 1,000 million. In the US, that is a billion, though in countries that use the long scale, a billion is 1,000,000 million. The abbreviation is a capital G (40 Gbps = 40 Gigabits per second).
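To make the prefixes above concrete, here is a minimal sketch in Python. The table of multipliers and the function name are our own illustration, not part of any standard library:

```python
# Decimal (SI) prefix multipliers, as used for network speeds.
PREFIXES = {
    "kilo": 10**3,  # 1 thousand
    "mega": 10**6,  # 1 million
    "giga": 10**9,  # 1,000 million (a US billion)
}

def to_bits_per_second(value, prefix):
    """Expand a rate like 100 "mega" into plain bits per second."""
    return value * PREFIXES[prefix]

print(to_bits_per_second(100, "mega"))  # 100 Mbps = 100,000,000 bits/s
print(to_bits_per_second(40, "giga"))   # 40 Gbps = 40,000,000,000 bits/s
```

So “a hundred megabits per second” is just 100 multiplied by the “mega” multiplier.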
From there, the next prefixes are “tera” (1,000 giga) and “peta” (1,000 tera), though for network speeds they are mostly just names for now. Some cables can carry higher speeds, but we are waiting for the standards to catch up before vendors implement them.
Back to “bytes”: after sizes from six to ten bits all claimed the name, it was settled that 8 bits equals a byte. Each of those bits has its own position value within the byte.
Starting with the lowest value (2 to the power of zero) as the least significant bit and moving up to the most significant bit (2 to the seventh power), each bit position has a fixed value. This gives a single byte a range of zero through 255. If a bit is set to one, that position’s value is added to the byte’s total; if it is set to zero, that position contributes nothing. This means a binary value of 01111001 is 64+32+16+8+1, a decimal total of 121.
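The position-value addition described above can be sketched in a few lines of Python (the helper function is our own illustration):

```python
def byte_value(bits):
    """Sum the position values of the bits that are set to one.

    `bits` is a string of eight 0s and 1s, most significant bit first,
    so position values run from 2**7 down to 2**0.
    """
    total = 0
    for position, bit in enumerate(reversed(bits)):
        if bit == "1":
            total += 2 ** position  # only set bits add their value
    return total

print(byte_value("01111001"))  # 64 + 32 + 16 + 8 + 1 = 121
print(byte_value("11111111"))  # all eight bits set = 255
```

Setting all eight bits gives the top of the range, 255, and clearing them all gives zero.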
Each of the letters, digits, spaces, and special characters we use fits into its own individual byte under the American Standard Code for Information Interchange (ASCII, pronounced ass-key). The 121 value in the previous paragraph translates to a lowercase “y”.
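We can check that mapping directly with Python’s built-in `chr()` and `ord()`, which convert between character codes and characters:

```python
# The byte value 121 is the ASCII code for a lowercase 'y'...
print(chr(121))         # prints "y"
# ...and 'y' maps back to 121.
print(ord("y"))         # prints 121
# The same value written out as the binary byte 01111001:
print(chr(0b01111001))  # prints "y"
```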
Bytes commonly appear in measurements of data stored on a storage device, like a disk drive or a solid-state drive (SSD). Bytes are also used to measure the speed of writing data to, and reading data back from, one of those drives.
Since bytes are the larger unit, the shorthand for a byte is a capital “B,” and the lowercase “b” is used for bits (e.g., 500 Gigabyte drive = 500GB).
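The capital-B/lowercase-b distinction matters when comparing network speeds to drive speeds. Since there are 8 bits in a byte, a sketch of the conversion (function name is our own) looks like this:

```python
BITS_PER_BYTE = 8

def megabits_to_megabytes(megabits_per_second):
    """Convert a network rate in megabits/s (Mb) to megabytes/s (MB)."""
    return megabits_per_second / BITS_PER_BYTE

print(megabits_to_megabytes(100))  # a 100 Mbps link moves at most 12.5 MB/s
```

This is why a “100 megabit” connection never downloads a file at 100 megabytes per second: the byte figure is one-eighth of the bit figure.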