Comments on Ccna final exam - java, php, javascript, ios, cshap all in one: C#, int or Int32? Should I care?

---

Some compilers have different sizes for int on different platforms (not C# specific).

Some coding standards (e.g. MISRA C) require that all types used are size-specified (i.e. Int32, not int).

It is also good to use prefixes for variables of different types (e.g. b for 8-bit byte, w for 16-bit word, and l for 32-bit long word => Int32 lMyVariable).

You should care because it makes your code more portable and more maintainable.

Portability may not be applicable to C# if you are always going to use C# and the C# specification never changes in this regard.

Maintainability, IMHO, will always be applicable, because the person maintaining your code may not be aware of this particular C# specification and may miss a bug where the int occasionally becomes more than 2147483647.

In a simple for-loop that counts, for example, the months of the year, you won't care; but when you use the variable in a context where it could possibly overflow, you should care.

You should also care if you are going to do bit-wise operations on it.

---

Use of int or Int32 is the same; int is just sugar to simplify the code for the reader.

Use the nullable variant int? or Int32? when you work with database fields containing null.
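A minimal sketch of the nullable pattern just described (the variable name is hypothetical, standing in for a value read from a nullable database column):

```csharp
using System;

class NullableDemo
{
    static void Main()
    {
        // int? (shorthand for Nullable<int>) can represent a missing value,
        // such as a NULL read from a database column.
        int? quantity = null;

        // HasValue guards against using a null where an int is required.
        Console.WriteLine(quantity.HasValue);  // False

        // The null-coalescing operator supplies a fallback.
        Console.WriteLine(quantity ?? -1);     // -1

        quantity = 5;
        Console.WriteLine(quantity.Value);     // 5
    }
}
```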
That will save you from a lot of runtime issues.

---

A while back I was working on a project with Microsoft when we had a visit from someone on the Microsoft .NET CLR product team. This person coded examples, and when he defined his variables he used "Int32" vs. "int" and "String" vs. "string". I remembered seeing this style in other example code from Microsoft. So I did some research and found that everyone says there is no difference between "Int32" and "int" except for syntax coloring. In fact, I found a lot of material suggesting you use "Int32" to make your code more readable. So I adopted the style.

The other day I did find a difference! The compiler doesn't allow you to base an enum on "Int32", but it does when you use "int". Don't ask me why, because I don't know yet.

Example:

    public enum MyEnum : Int32
    {
        AEnum = 0
    }

This doesn't compile.

    public enum MyEnum : int
    {
        AEnum = 0
    }

This works.

Taken from: Int32 notation vs. int

---

Thanks for the link, Ray. I'd always just assumed that when the .NET framework moved to 64-bit, the int type would become 64-bit as well.

---

int is identical to Int32 - in the Microsoft compiler.
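That identity is easy to verify directly; a small sketch:

```csharp
using System;

class AliasDemo
{
    static void Main()
    {
        // int is the C# alias for System.Int32: the very same type object.
        Console.WriteLine(typeof(int) == typeof(Int32));   // True

        // sizeof(int) is a compile-time constant: always 4 bytes.
        Console.WriteLine(sizeof(int));                    // 4

        // The constants on the keyword and the framework type agree.
        Console.WriteLine(int.MaxValue == Int32.MaxValue); // True
    }
}
```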
It's just a shortcut for System.Int32. If you ever plan on doing some cross-platform C# / .NET development, you can't guarantee that int will be available to you, and for that reason it's always best to stick to the long-hand versions of the classes, e.g. Int32 / Int64, etc.

---

One that no one has mentioned yet is Int16. If you need to store an integer in memory in your app and you are concerned about the amount of memory used, then you could go with Int16, since it uses less memory and has a smaller min/max range than Int32 (which is what int is).

---

It doesn't matter. int is the language keyword and Int32 is its actual system type.

See also my answer here to a related question.

---

You should not care in most programming languages, unless you need to write very specific mathematical functions, or code optimized for one specific architecture...
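The ranges behind that choice of size can be printed directly; a quick sketch (class name is arbitrary):

```csharp
using System;

class RangeDemo
{
    static void Main()
    {
        // Each fixed-size type advertises its range; pick one that fits your data.
        Console.WriteLine(Int16.MaxValue); // 32767
        Console.WriteLine(Int32.MaxValue); // 2147483647
        Console.WriteLine(Int64.MaxValue); // 9223372036854775807

        // A value that cannot fit in 32 bits needs long (Int64).
        long big = 3_000_000_000;
        Console.WriteLine(big > int.MaxValue); // True
    }
}
```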
Just make sure the size of the type is enough for you (use something bigger than an int if you know you'll need more than 32 bits, for example).

---

The number of bytes an int can hold depends on what you compiled it for: when you compile your program for 32-bit processors, it holds numbers from -2^31 to 2^31-1, while compiled for 64-bit it can hold from -2^63 to 2^63-1. Int32 will always hold 2^32 values.

Edit: Ignore my answer, I didn't see C#. My answer was intended for C and C++. I've never used C#.

---

int is the C# version, whereas Int32 is the .NET (CLR) version, usable in any .NET language. But there's no difference in what they do. I've heard it said that if you're writing idiomatic C#, you should use int. But personally, I prefer Int32 for clarity.

---

I'd recommend using Microsoft's StyleCop, at http://code.msdn.microsoft.com/sourceanalysis

It is like FxCop, but for style-related issues.
The default configuration matches Microsoft's internal style guides, but it can be customised for your project.

It can take a bit of getting used to, but it definitely makes your code nicer.

You can include it in your build process to automatically check for violations.

---

I always use the aliased types (int, string, etc.) when defining a variable, and use the real name when accessing a static method:

    int x, y;
    ...
    String.Format("{0}x{1}", x, y);

It just seems ugly to see something like int.TryParse(). There's no reason I do this other than style.

---

I use int in the event that MS changes the default implementation of integer to some newfangled version (let's call it Int32b). MS can then change the int alias to Int32b and I don't have to change any of my code to take advantage of their new (and hopefully improved) integer implementation.

The same goes for any of the type keywords.

---

It makes no difference in practice, and in time you will adopt your own convention.
I tend to use the keyword when declaring a type, and the class version when using static methods and such:

    int total = Int32.Parse("1009");

---

Though they are (mostly) identical (see below for the one [bug] difference), you definitely should care, and you should use Int32.

The name for a 16-bit integer is Int16; for a 64-bit integer it's Int64; and for a 32-bit integer the intuitive choice is... int or Int32?

The question of the size of a variable of type Int16, Int32, or Int64 answers itself, but the question of the size of a variable of type int is a perfectly valid question, and questions, no matter how trivial, are distracting, lead to confusion, waste time, hinder discussion, etc. (the fact that this question exists proves the point).

Using Int32 makes the developer conscious of their choice of type. How big is an int again? Oh yeah, 32. The likelihood that the size of the type will actually be considered is greater when the size is included in the name. Using Int32 also promotes knowledge of the other choices. When people aren't forced to at least recognize that there are alternatives, it becomes far too easy for int to become "THE integer type".

The class within the framework intended to interact with 32-bit integers is named Int32. Once again, that is more intuitive and less confusing, and lacks an (unnecessary) translation (not a translation in the system, but in the mind of the developer): int lMax = Int32.MaxValue or Int32 lMax = Int32.MaxValue?
int isn't a keyword in all .NET languages.
Although there are arguments why it's not likely ever to change, int may not always be an Int32.

The drawbacks are two extra characters to type and the [bug]:

this won't compile

    public enum MyEnum : Int32
    {
        AEnum = 0
    }

but this will

    public enum MyEnum : int
    {
        AEnum = 0
    }

---

int is the C# language's shortcut for System.Int32.

Whilst this does mean that Microsoft could change the mapping, a post on FogCreek's discussions stated [source]:

"On the 64 bit issue -- Microsoft is indeed working on a 64-bit version of the .NET Framework, but I'm pretty sure int will NOT map to 64 bit on that system.

Reasons:

1. The C# ECMA standard specifically says that int is 32 bit and long is 64 bit.

2. Microsoft introduced additional properties & methods in Framework version 1.1 that return long values instead of int values, such as Array.GetLongLength in addition to Array.GetLength.

So I think it's safe to say that all built-in C# types will keep their current mapping."

---

Once upon a time, the int datatype was pegged to the register size of the machine targeted by the compiler. So, for example, a compiler for a 16-bit system would use a 16-bit integer.
However, we thankfully don't see much 16-bit any more; and when 64-bit started to get popular, people were more concerned with making it compatible with older software, and 32-bit had been around so long that for most compilers an int is just assumed to be 32 bits.

---

int is the same as System.Int32, and when compiled it will turn into the same thing in IL.

We use int by convention in C# since C# wants to look like C and C++ (and Java), and that is what we use there...

BTW, I do end up using System.Int32 when declaring imports of various Windows API functions. I am not sure if this is a defined convention or not, but it reminds me that I am going to an external DLL...

---

I know that the best practice is to use int, and all MSDN code uses int. However, there's no reason beyond standardisation and consistency as far as I know.

---

You shouldn't care. You should use int most of the time. It will help the porting of your program to a wider architecture in the future (currently int is an alias to System.Int32, but that could change).
Only when the bit width of the variable matters (for instance, to control the layout in memory of a struct) should you use Int32 and the others (with the associated "using System;").

---

In my experience it's been a convention thing. I'm not aware of any technical reason to use int over Int32, but it's:

- Quicker to type.
- More familiar to the typical C# developer.
- A different color in the default Visual Studio syntax highlighting.

I'm especially fond of that last one. :)

---

There is no difference between int and Int32, but as int is a language keyword many people prefer it stylistically (just as with string vs. String).

---

Byte size for types is not too interesting when you only have to deal with a single language (and for code where you don't have to remind yourself about math overflows).
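Overflow is exactly where the fixed 32-bit size bites; a hedged sketch using C#'s checked/unchecked contexts:

```csharp
using System;

class OverflowDemo
{
    static void Main()
    {
        int max = int.MaxValue; // 2147483647 for Int32, regardless of platform

        // By default, C# integer arithmetic on variables wraps silently.
        unchecked
        {
            Console.WriteLine(max + 1); // -2147483648 (wrapped around)
        }

        // A checked context turns the same overflow into an exception.
        try
        {
            checked
            {
                int boom = max + 1;
                Console.WriteLine(boom); // never reached
            }
        }
        catch (OverflowException)
        {
            Console.WriteLine("overflow detected");
        }
    }
}
```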
The part that becomes interesting is when you bridge from one language to another, or from C# to a COM object, etc., or when you're doing some bit-shifting or masking and you need to remind yourself (and your code-review co-workers) of the size of the data.

In practice, I usually use Int32 just to remind myself what size they are, because I write Managed C++ (to bridge to C#, for example) as well as unmanaged/native C++.

As you probably know, long in C# is 64 bits, but in native C++ it ends up as 32 bits; char in C# is Unicode/16 bits, while in C++ it is 8 bits. But how do we know this? The answer is: because we've looked it up in the manual and it said so.

With time and experience, you will start to be more type-conscious when you write code to bridge between C# and other languages (some readers here are thinking "why would you?"), but IMHO it is a better practice, because I cannot remember what I coded last week (and I don't have to specify in my API document that "this parameter is a 32-bit integer").

In F# (although I've never used it), they define int, int32, and nativeint. The same question arises: "which one do I use?" As others have mentioned, in most cases it should not matter (it should be transparent). But I for one would choose int32 and uint32 just to remove the ambiguity.

I guess it just depends on what applications you are coding, who's using them, and what coding practices you and your team follow, to justify when to use Int32.

---

int is a C# keyword and is unambiguous.
Most of the time it doesn't matter, but two things go against Int32:

- You need to have a "using System;" statement; using "int" requires no using statement.
- It is possible to define your own class called Int32 (which would be silly and confusing). int always means int.

---

You shouldn't care. If size is a concern, I would use byte, short, int, then Int64. The only reason you would use a type larger than Int32 is if you need a number higher than 2147483647 or lower than -2147483648.

Other than that I wouldn't care; there are plenty of other things to be concerned with.
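The byte/short/int/Int64 ladder mentioned above has concrete sizes and limits that can be printed directly; a small sketch (class name is arbitrary):

```csharp
using System;

class SizeLadder
{
    static void Main()
    {
        // The aliased keywords map onto fixed-size framework types.
        Console.WriteLine(sizeof(byte));  // 1
        Console.WriteLine(sizeof(short)); // 2
        Console.WriteLine(sizeof(int));   // 4
        Console.WriteLine(sizeof(long));  // 8

        // The Int32 limits quoted in the comment above:
        Console.WriteLine(int.MaxValue);  // 2147483647
        Console.WriteLine(int.MinValue);  // -2147483648
    }
}
```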