Counting from zero is a common practice in many computer languages, but why? Read on as we explore the origin of the convention and the practical advantages it offers.
Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.
The Question
SuperUser reader DragonLord is curious about why most operating systems and programming languages count from zero. He writes:

Computers traditionally tally numerical values starting from zero. For example, arrays in C-based programming languages start from index zero. What historical reasons exist for this, and what practical advantages does counting from zero have over counting from one?

Why indeed? As widespread as the practice is, surely there are practical reasons for its implementation.
The Answer
SuperUser contributor Matteo offers the following insights (if you're looking to delve deeper into the answer, the Dijkstra paper referenced below is an informative read):

Counting arrays from 0 simplifies the computation of the memory address of each element.
If an array is stored at a given position in memory (called its address), the position of each element can be computed as

element(n) = address + n * size_of_the_element

If you consider the first element to be element 1, the computation becomes

element(n) = address + (n - 1) * size_of_the_element

Not a huge difference, but it adds an unnecessary subtraction for each access.
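As a concrete illustration of that arithmetic (a sketch of my own, not part of Matteo's answer; the array name and element type are arbitrary), a zero-based index in C maps directly onto the address computation above:

#include <stdio.h>

int main(void) {
    int values[4] = {10, 20, 30, 40};

    /* With zero-based indexing, the address of element n is simply
       the base address plus n * sizeof(element); no correction term. */
    for (int n = 0; n < 4; n++) {
        int *computed = (int *)((char *)values + n * sizeof(int));
        printf("values[%d] = %d, via address arithmetic = %d\n",
               n, values[n], *computed);
    }
    return 0;
}

With one-based indexing, every such access would instead need (n - 1) in the offset calculation.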
Edited to add:
- The usage of the array index as an offset is not a requirement, just a habit. The offset of the first element could be hidden by the system and taken into account when allocating and referencing elements.
- Dijkstra published a paper, “Why numbering should start at zero” (pdf), in which he explains why starting at 0 is the better choice: it allows a cleaner representation of ranges, since a sequence of N elements is described by the half-open range 0 ≤ i < N, as the sketch below illustrates.
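To make the range point concrete (a minimal sketch of my own, not taken from Matteo's answer or the Dijkstra paper; the array and its length are arbitrary), zero-based indexing pairs naturally with half-open ranges in a C loop:

#include <stdio.h>

int main(void) {
    int data[8] = {1, 2, 3, 4, 5, 6, 7, 8};

    /* Zero-based, half-open range [0, 8): the upper bound equals the
       element count, and length = upper - lower with no off-by-one term. */
    int sum = 0;
    for (int i = 0; i < 8; i++) {
        sum += data[i];
    }

    /* Splitting at any index k yields [0, k) and [k, 8), which together
       cover every element exactly once. */
    printf("sum = %d\n", sum);
    return 0;
}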