Tuesday, 27 August 2013

Why do functions take an int as an argument when its value logically cannot be < 0

When I am writing functions that take an argument that determines a certain length, I always use uint, since it makes no sense for the value to be negative.
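
For instance, a rough sketch of that style (a hypothetical helper, not framework code):

public static char[] MakeBuffer(uint length)
{
    // The parameter type already guarantees the value cannot be negative,
    // so no "length < 0" check is needed.
    return new char[length];
}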
But I see the opposite (very often) in the .NET Framework. For example:
http://msdn.microsoft.com/en-us/library/xsa4321w.aspx
Initializes a new instance of the String class to the value indicated by a
specified Unicode character repeated a specified number of times.
public String(
char c,
int count
)
It also states that an ArgumentOutOfRangeException is thrown when "count is less than zero."
Why not make the count argument a uint then!?
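
For comparison, the signed version has to reject the impossible values at run time. A rough sketch of what that check might look like (my own illustration, not the actual framework source):

public static string Repeat(char c, int count)
{
    // With int, nothing stops a caller from passing a negative value,
    // so the method has to validate it and throw, as the documentation describes.
    if (count < 0)
        throw new ArgumentOutOfRangeException("count", "count is less than zero.");
    return new string(c, count);
}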
