Binary


This is a basic concept in computer science.

In mathematics and digital electronics, a binary number is a number expressed in the binary numeral system or base-2 numeral system which represents numeric values using two different symbols: typically 0 (zero) and 1 (one). The base-2 system is a positional notation with a radix of 2. Because of its straightforward implementation in digital electronic circuitry using logic gates, the binary system is used internally by almost all modern computers and computer-based devices. Each digit is referred to as a bit.[1]
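The positional rule above can be worked through in a few lines of Python (a minimal sketch; the function name is my own):

def binary_to_decimal(bits: str) -> int:
    # Each digit contributes digit * 2**position, counting positions from the right.
    total = 0
    for position, digit in enumerate(reversed(bits)):
        total += int(digit) * 2 ** position
    return total

print(binary_to_decimal("1011"))  # 8 + 0 + 2 + 1 = 11
print(int("1011", 2))             # Python's built-in conversion agrees: 11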


Binary

This is one of the better videos I've seen on binary.


Binary translation table

I find it helpful to draw this table when I must convert binary to base 10.

2^7  2^6  2^5  2^4  2^3  2^2  2^1  2^0
128   64   32   16    8    4    2    1
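To use the table (the value 10110110 is my own example), write the bits under the place values and add up the columns that hold a 1:

place_values = [128, 64, 32, 16, 8, 4, 2, 1]
bits = "10110110"  # example value, chosen for illustration
# Add up the place values sitting under a 1.
total = sum(value for value, bit in zip(place_values, bits) if bit == "1")
print(total)  # 128 + 32 + 16 + 4 + 2 = 182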


How to add two binary numbers

Adding binary is straightforward. Line up the numbers as you would if you were adding base-10 numbers.

Remember this:

0 + 0 = 0

0 + 1 = 1

1 + 0 = 1

1 + 1 = 10, so write a 0 and carry the 1 to the next column.

1 + 1 + 1 = 11 (two ones plus a carried 1), so write a 1 and carry the 1 to the next column.
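These rules translate directly into code. A minimal sketch (the function name is my own) that adds two binary strings column by column, rightmost first, carrying exactly as described:

def add_binary(a: str, b: str) -> str:
    # Pad the shorter number with leading zeros so the columns line up.
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    result = []
    carry = 0
    # Work from the rightmost column to the leftmost.
    for bit_a, bit_b in zip(reversed(a), reversed(b)):
        column = int(bit_a) + int(bit_b) + carry
        result.append(str(column % 2))  # the digit we write down
        carry = column // 2             # the digit we carry
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(add_binary("1011", "110"))  # 11 + 6 = 17, which is 10001 in binary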

What you must know

You must be able to correctly answer the following questions:

Define the term: bit

A bit is the basic unit of information in computing and digital communications. A bit can have only one of two values, and may therefore be physically implemented with a two-state device. These values are most commonly represented as either a 0 or a 1. The term bit is a portmanteau of binary digit.[2]


Define the term: byte

The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer and for this reason it is the smallest addressable unit of memory in many computer architectures. The size of the byte has historically been hardware dependent and no definitive standards existed that mandated the size. The de facto standard of eight bits is a convenient power of two permitting the values 0 through 255 for one byte. The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits and processor designers optimize for this common usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit size.[3]
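To see why eight bits give exactly the values 0 through 255, a quick check (my own illustration):

# Eight bits allow 2**8 = 256 distinct patterns, so one byte
# can hold any value from 0 up to 255.
print(2 ** 8)             # 256
print(format(0, "08b"))   # 00000000, the smallest byte value
print(format(255, "08b")) # 11111111, the largest byte value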

binary

denary/decimal

hexadecimal
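The three terms above name the same quantities in different bases. A small sketch (the value 182 is my own example) showing one number in all three:

value = 182  # example value, chosen for illustration
print(format(value, "b"))  # 10110110 (binary, base 2)
print(value)               # 182      (denary/decimal, base 10)
print(format(value, "x"))  # b6       (hexadecimal, base 16)
# Converting back from each textual form gives the same number:
print(int("10110110", 2), int("182", 10), int("b6", 16))  # 182 182 182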

Click here to test yourself

Why is this so important?

If we can represent numbers as 1 and 0, why not represent numbers as on and off? If we can represent letters as numbers (A = 65, B = 66) couldn't we also say A = 01000001 and B = 01000010?

Binary representation is the essence of how computers work.
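A quick check of that idea in Python (the values for A and B are standard ASCII):

# ord gives a character's numeric code; format renders it as an 8-bit pattern.
for letter in "AB":
    print(letter, ord(letter), format(ord(letter), "08b"))
# prints: A 65 01000001
#         B 66 01000010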

Resources

Click here for a slide deck that covers this topic nicely