To understand the difference between Ethernet II and 802.3, you first must know how Ethernet works. While Ethernet cables transmit data, their role is relatively simple compared to that of the Ethernet card — also referred to as an adapter. It’s within the function of this card that you find the differences between Ethernet II and 802.3.
But here’s the short answer:
The biggest difference between Ethernet II and 802.3 lies in the fields of their Ethernet headers. Keep reading for more details.
What an Ethernet Adapter Does
An Ethernet adapter, or card, takes data that your computer is sending and formats it so it can be transmitted and understood by the computer or other device it’s going to. In a way, an Ethernet card is a lot like a phone. Think of the data you’re sending from your computer as someone’s voice before it goes into the phone. Obviously, it can’t be sent as-is because it’s a bunch of sound waves, and these don’t travel well through a wire.
To overcome this obstacle, the microphone inside the phone picks up these sound waves and turns them into signals a phone on the other end can understand. This enables the other person’s phone to take what you said and turn it back into sound waves.
An Ethernet adapter does the same with your computer’s data, as do the adapters in thousands of data centers and servers around the world. The data you’re sending is formatted and segmented into frames. Each frame contains information such as where the data is going, where it’s coming from, and its size. Ethernet also checks for errors using a frame check sequence (FCS).
The frames generated by an Ethernet system as it transmits your data have headers. This is where the destination address, source address, and EtherType or length are specified. The EtherType indicates which protocol is carried in the frame’s payload, such as IPv4 or ARP. The receiving device uses this information to figure out how to process the data being sent. Here’s a more detailed breakdown of Ethernet protocol types.
Ethernet Protocol Types
The EtherType field contains two bytes of hexadecimal code that identify the protocol carried in the frame. Some of the most commonly used codes are 0x0800 (IPv4), 0x0806 (ARP), and 0x86DD (IPv6). There are many codes, however, and some have been registered by private corporations to streamline their own data transmissions.
For example, AppleTalk has two EtherTypes, 0x809B (AppleTalk itself) and 0x80F3 (AppleTalk ARP), which are used during the protocol’s data transmissions. There are also EtherTypes that enable common services, such as Multiprotocol Label Switching (MPLS), which has one EtherType for unicast (0x8847) and one for multicast (0x8848).
The length field, on the other hand, is far simpler. It tells the receiving device how many bytes are in the frame’s data field, and the receiver uses it to process the data. Because the type and length values share the same two bytes of the header, the receiver disambiguates them by value: anything of 1,536 (0x0600) or greater is treated as an EtherType, while values of 1,500 or less are treated as a length. However, it’s the header that we’re mainly interested in when discussing how Ethernet II and 802.3 are different.
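As a rough illustration, here’s a minimal Python sketch of that disambiguation rule; the constant and function names are illustrative, not from any particular library:

```python
ETHERTYPE_MIN = 0x0600   # 1536; values at or above this are EtherTypes
MAX_802_3_LENGTH = 1500  # values at or below this are 802.3 lengths

def classify_type_length(value: int) -> str:
    """Interpret the 2-byte Type/Length field of an Ethernet frame."""
    if value >= ETHERTYPE_MIN:
        return f"Ethernet II frame, EtherType 0x{value:04X}"
    if value <= MAX_802_3_LENGTH:
        return f"IEEE 802.3 frame, payload length {value} bytes"
    return "invalid Type/Length value"  # 1501-1535 are undefined

print(classify_type_length(0x0800))  # Ethernet II carrying IPv4
print(classify_type_length(46))      # 802.3 frame with a 46-byte payload
```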
The Difference Between Ethernet II and 802.3
The biggest difference between Ethernet II and 802.3 lies in the fields of their Ethernet headers. The important distinction is that the Type field in Ethernet II has been replaced with a 2-byte Length field in the IEEE formats. Ethernet II is much more popular, for reasons that I’ll make clear shortly. Its header fields are:
- Preamble. This provides synchronization, since the sender’s and receiver’s interface cards run on different system clocks.
- Start frame delimiter. This tells the Ethernet software where to start reading the frame.
- Destination address. The Media Access Control (MAC) address of the device this frame is supposed to go to.
- Source address. The MAC address of the sending device.
- Type field. This identifies the kind of packet in the data field. It’s also called the EtherType.
- Data field. This carries the application data plus higher-layer networking overhead.
- Frame check sequence. The sending NIC runs a calculation on the bit stream and puts the result in this field. The receiver of the frame then runs the same calculation on the bit stream it received and compares the two values. If the bit stream has been changed in transit, the values won’t match and the frame will be discarded.
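To make these fields concrete, here’s a minimal Python sketch that unpacks an Ethernet II header from raw bytes. It assumes the preamble, start frame delimiter, and FCS have already been stripped, which is how network hardware typically hands frames to software; the function and field names are illustrative.

```python
import struct

def parse_ethernet_ii(frame: bytes) -> dict:
    """Split a raw Ethernet II frame into header fields and payload."""
    if len(frame) < 14:
        raise ValueError("frame too short for an Ethernet header")
    dest, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return {
        "destination": dest.hex(":"),       # e.g. 'ff:ff:ff:ff:ff:ff'
        "source": src.hex(":"),
        "ethertype": f"0x{ethertype:04X}",  # e.g. 0x0800 for IPv4
        "payload": frame[14:],
    }

# A hypothetical broadcast frame carrying a minimum-size IPv4 payload:
frame = (bytes.fromhex("ffffffffffff")      # destination MAC (broadcast)
         + bytes.fromhex("020000000001")    # source MAC
         + struct.pack("!H", 0x0800)        # EtherType: IPv4
         + b"\x45" + b"\x00" * 45)          # 46-byte data field
print(parse_ethernet_ii(frame))
```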
What is Ethernet 2?
Ethernet 2 (also written “Ethernet II” or “Ethernet Version 2”) is a standard frame format used across networking equipment, regardless of the manufacturer. It was developed by DEC, Intel, and Xerox, which is why it’s also called the DIX format; the IEEE (Institute of Electrical and Electronics Engineers) later published its own standard, 802.3.
802.2 vs. 802.3
As frame type names, 802.3 and 802.2 do not refer to physical architectures but to the format of the Layer 2 Ethernet frame.
- 802.2 is the default frame type for NetWare 3.12 and 4.x. 802.3 is the default for NetWare 3.11 and earlier.
- The 802.2 frame type is essentially Novell’s raw 802.3 frame plus an 802.2 Logical Link Control (LLC) header, which is how the IEEE intended frames in its own Ethernet specification to be used; Novell’s “raw” 802.3 format omits the LLC header.
What are the Frame Formats from the IEEE?
There are three frame formats from the IEEE: raw IEEE 802.3, IEEE 802.3 with an 802.2 LLC header, and IEEE 802.3 with 802.2 and SNAP headers. Modern operating systems can send and receive any of these frame formats.
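As a rough sketch of how software might tell these formats apart, the snippet below applies two widely used heuristics: a Novell raw 802.3 payload begins with the 0xFFFF checksum of an IPX header, and a SNAP frame sets both LLC service access point bytes to 0xAA. The function name is illustrative, and real dissectors handle more edge cases.

```python
def classify_802_3_payload(payload: bytes) -> str:
    """Guess which IEEE 802.3 variant a frame's payload uses."""
    if len(payload) < 2:
        return "too short to classify"
    if payload[0] == 0xFF and payload[1] == 0xFF:
        return "raw 802.3 (Novell, no LLC header)"
    if payload[0] == 0xAA and payload[1] == 0xAA:
        return "802.3 with 802.2 LLC and SNAP"
    return "802.3 with 802.2 LLC"
```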
Why is Ethernet 2 More Popular Among Managers?
To run TCP/IP over IEEE 802.3, the SNAP format has to be used. That requires eight bytes of the data field to identify the kind of data the frame is carrying: three bytes for the Logical Link Control (LLC) header, three bytes for the SNAP organization code, and two bytes for the Protocol Type field. That means the data field shrinks from the standard range of 46 to 1,500 bytes down to a range of 38 to 1,492. This is the reason most network managers stay with Ethernet II.
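Here’s that arithmetic as a short sketch, with the field sizes from the paragraph above encoded as constants (the names are illustrative):

```python
LLC_HEADER = 3     # DSAP, SSAP, and Control bytes
SNAP_ORG_CODE = 3  # organization (OUI) bytes of the SNAP header
SNAP_PROTOCOL = 2  # Protocol Type field
SNAP_OVERHEAD = LLC_HEADER + SNAP_ORG_CODE + SNAP_PROTOCOL  # 8 bytes

ETHERNET_II_PAYLOAD = (46, 1500)  # min and max data field, in bytes
SNAP_PAYLOAD = (ETHERNET_II_PAYLOAD[0] - SNAP_OVERHEAD,
                ETHERNET_II_PAYLOAD[1] - SNAP_OVERHEAD)
print(SNAP_PAYLOAD)  # (38, 1492)
```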
Why Ethernet II Can Make a Difference
When Ethernet was first standardized, it boasted a relatively modest throughput of 10 megabits per second (Mbps). Back in 1980, this was an impressive feat, and because the amount of data that had to be transmitted was small, it got the job done. A quick look at some simple data transmission math explains why the eight bytes taken up by the 802.3 SNAP headers can make admins choose Ethernet II.
Suppose you’re sending data that is 100 megabytes in size over Ethernet. The Ethernet card in your computer, as described above, has to convert the data and send it via frames. Due to the way Ethernet works, each frame takes a certain amount of time to send, and while this is small, it can become a significant factor, especially if the data has to be broken into too many frames.
In our example, when sending 100 megabytes of data, if we use Ethernet II, we can have frames with data fields as large as 1,500 bytes each. For the sake of this example, we’re going to assume each frame carries the maximum, 1,500 bytes. Therefore, a 100-megabyte payload of data will be divided into 66,667 individual Ethernet transmissions. While that’s a lot, it’s significantly less than what would be needed if each payload had to be 8 bytes smaller, or 1,492 bytes.
If each payload had to be sent over 802.3 with SNAP, capping the data field at 1,492 bytes, our 100-megabyte chunk of data would need to be sent using 67,025 individual Ethernet frames. That’s a difference of 358 frames. But why would that even matter? Two reasons: time and the integrity of the transmission.
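Here’s that frame math as a minimal Python sketch, assuming every frame carries a full data field:

```python
import math

DATA_SIZE = 100_000_000  # 100 megabytes, as in the example above

frames_ethernet_ii = math.ceil(DATA_SIZE / 1500)  # Ethernet II data field
frames_snap = math.ceil(DATA_SIZE / 1492)         # 802.3 with SNAP

print(frames_ethernet_ii)                # 66667
print(frames_snap)                       # 67025
print(frames_snap - frames_ethernet_ii)  # 358
```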
Why Time Could Be a Factor
The amount of time it takes to send Ethernet packets, on the surface, is negligible. If you’re waiting for a large document, an email, or a webpage to load, the time that passes between each frame is so minuscule you won’t notice it.
On the other hand, if an organization depends on a constant stream of data going to and from users, even small delays can impact the end-user experience. In applications that provide split-second computations that influence everything from how a self-driving car turns to when a trade is placed on the stock market, fractions of a second can mean the difference between success and failure—or even safety and danger.
Also, if a cloud service provider needs to meet the requirements of an SLA (service level agreement), the amount of time it takes to send data to and from users can mean the difference between satisfied or unsatisfied clients. Retaining as many loyal customers as possible may come down to the speeds at which data can be sent.
Why Data Integrity Can Be a Concern
Each time data gets sent over a shared, half-duplex Ethernet connection, the system has to check whether other traffic is already on the wire that your data might collide with; this is the carrier sense multiple access with collision detection (CSMA/CD) mechanism. If a collision occurs, your Ethernet system pauses its transmission and retries. If transmissions are paused often enough, the receiver may experience noticeable drops in quality.
Again, in most situations, the difference in the chance of a data integrity issue popping up over 358 extra frames, for example, is usually small. But as the sizes of transmissions climb into gigabytes, terabytes, and beyond, the chance of there being a problem rises accordingly. This is often enough to make admins bypass the use of 802.3.
Ethernet II vs. IEEE 802.3
An Ethernet card, or adapter, takes the data you’re sending and puts it into frames that can be read by the computer that’s receiving the data. With IEEE 802.3, there is slightly less space available for data after the framing process has been completed.
For this reason, some managers prefer Ethernet II—not because it’s better at transmitting data but because it has a higher limit as to the amount of data that can be transferred within each frame.
For users concerned about how much data can be sent within each frame, Ethernet II may be a preferable option. For people whose transmissions aren’t as dependent upon the speed at which data can be sent, the speed differences between Ethernet II and 802.3 may be negligible.