
technofile  by al fasoldt

Columns and commentaries in a life-long dance with technology


What bits really are, and why they're not characters

By Al Fasoldt

Copyright © 1989, The Syracuse Newspapers

Something's been driving me a bit crazy.

A guy who writes a computer column for one of the big news services keeps making the same mistake week after week.

He keeps talking about the "characters" that are stored in computers and disks.

I can guess why he does this—he wants to get around the problem of using computer jargon—but I sure wish he'd get things straight. Readers are confused enough already.

Let's hit the help key here and set things straight.

Computers don't work with characters. They work with numbers.

Not just any numbers. They use special ones, called "bits."

Bits are beautiful in their simplicity. A bit is the easiest concept in all of math. It's something that is either on or off, either here or there, either rain or shine.

It's like a light switch. If the switch is off, the bit is 0. If the switch is on, the bit is 1.
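If you'd like to see the light-switch idea written down, here it is as a tiny Python sketch (Python is used only for illustration, and the variable names are mine, not standard terms):

```python
# A bit records an on/off state as the number 1 or 0.
switch_is_on = True
bit = 1 if switch_is_on else 0
print(bit)  # prints 1

switch_is_on = False
bit = 1 if switch_is_on else 0
print(bit)  # prints 0
```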

You use this kind of counting method all the time, except that you probably don't think about it.

Did you watch "The Cosby Show" this week? If the TV was off, the bit for "Cosby watching" is 0. If it was on, the bit is 1.

With only 0 and 1 to choose from, you can't count very far. You wouldn't be able to tell me if you saw two Cosby shows, a new one and a rerun, for instance.

So you usually string bits together. That's just what computer designers did a long time back. They put eight bits together and created a "byte."

So a byte is eight bits. Think of it like regular weights and measures—eight ounces to a cup, eight bits to a byte.
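You can check how far strung-together bits can count with a short Python sketch (again, just an illustration; the names are mine):

```python
# Each extra bit doubles how many different values you can represent.
for n_bits in (1, 2, 8):
    print(n_bits, "bits give", 2 ** n_bits, "possible values")

# A byte is eight switches in a row. Read as a binary number,
# this particular pattern of ons and offs is the number 202.
byte = 0b11001010
print(byte)  # prints 202
```

One bit gives 2 values, two bits give 4, and a full byte of eight bits gives 256.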

Unfortunately, it's not as easy to follow the next step. Computers count with bits and bytes by using binary arithmetic. Binary, like bicycle and biplane, means two of something—in this case, two numbers. Seems simple, right?

Well, may you be so lucky some other time. Binary math is anything but simple.

For example, we usually refer to bytes in their larger form, called kilobytes. Kilo means 1,000. So that's 1,000 bytes, yes?

Sorry, it's not. That is, not really. The kilobyte is the baker's dozen of computer measurements. Binary counting throws in a few extra bytes for free.

In other words, for every 1,000 bytes in a kilobyte, the binary numbering system adds 24 more. A kilobyte is really 1,024 bytes.

Maybe that doesn't sound like much, but it can add up to quite a few extra bytes if you're working with a lot of computer memory. A megabyte—a million bytes, meaning 1,000,000 in the kind of counting we're used to—is 1,048,576 bytes to the computer, or 48,576 bytes more than our familiar million.

That's enough extra memory for several newspaper computer columns written on a word processor.
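The baker's-dozen arithmetic is easy to verify for yourself; a quick sketch:

```python
# "Kilo" and "mega" in binary counting are powers of two, not of ten.
kilobyte = 2 ** 10   # 1,024 bytes
megabyte = 2 ** 20   # 1,048,576 bytes

print(kilobyte - 1000)     # prints 24 -- the free bytes in each kilobyte
print(megabyte - 1000000)  # prints 48576 -- the free bytes in a megabyte
```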

Which brings me back to the writer from the news service. When he types his column, what shows up on his screen isn't a bunch of bits or bytes, but characters. He sees letters and words.

The characters on his screen are stored as codes. An "E" has a different code from a "W," and so on. Seven bits are all that are needed for each character's code. To keep things simple, computers usually add one more bit to that code to make a full byte.
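Python can show those codes directly; its built-in ord and chr functions convert between a character and its number (a minimal sketch, not how the columnist's word processor actually works):

```python
# Each character is stored as a number; the screen draws the letter.
print(ord("E"))  # prints 69
print(ord("W"))  # prints 87
print(chr(69))   # prints E

# Seven bits are enough: every code here fits below 2 ** 7 = 128.
print(ord("E") < 128 and ord("W") < 128)  # prints True
```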

So that's where the confusion comes in. Characters and bytes are each made up of eight bits, but that's all they have in common. Characters are what show up on the screen, but bytes are what the computer works with.

Sometimes, of course, characters are what show up at your door trying to sell you something, but that's, ahem, a bit different, if you know what I mean.
