I am a C++ programmer. While browsing the web I came across terms like undefined behavior, unspecified behavior, and implementation-defined behavior. I am wondering why some things are left undefined. I know that C++ is close to the hardware, but is it really that hard to define a behavior for some of these cases? If so, why didn't they do it? This question is about the philosophy behind leaving so many behaviors open to the compiler implementation. I believe there are two advantages:
1) It allows better performance. It simplifies the compiler's job, making it possible to generate very efficient code in certain situations.
2) It gives compiler vendors the flexibility to handle these cases in their own way.
If you know of any factors other than performance, please tell me. Your help will be highly appreciated.
Thanks
Besides the reasons you already mentioned, one important thing to understand is that CPUs differ. They still do, but it used to be worse. Trying to specify exactly how C++ behaves in such corner cases just isn't helpful. For example, 0/0 is handled differently by different CPUs, and it doesn't really matter for real programs.
Another issue is that some UB is hard to detect. For instance, violations of the One Definition Rule across translation units would require support in the linker, and there has been quite some tolerance for vendors that rely on primitive linkers.