I will write another post giving more of my thoughts on C++11, but I should mention right off the bat that I think C++11 has a lot of really cool features that greatly improve the language. Developing modern code in C++ is not a bad thing, but the legacy of C++ means that not all code written today has to use the modern style, and worse, most of the code already in existence doesn't use these features at all. Plus, even with the improvements, C++ still uses pretty much the same toolchain as C, and that is a bit of a problem.
How They Differ

I've always known that import and #include do different things. I make sure to point this out to students any time I am teaching a class where I can compare languages that use these two different features. However, working on a large C++ code base made the difference really stick out. What I came to realize is that #include causes a problem because it impacts the structure of code. This isn't an issue with import because it does nothing more than tell the compiler how to resolve short names in a source file into fully specified names.
The difference becomes more obvious when you run into situations where the order of #includes in a source file is important. I have to point out that if one follows all the normal best practices this never happens. Unfortunately, not everyone has followed these practices. In particular, there are Windows libraries that do things that break the rules and cause the order of includes to matter. The Windows libraries also have odd behaviors where you aren't allowed to include some files directly because they depend on other things being defined first. This can be particularly challenging when you are working on a project and the IDE tools do a great job of telling you exactly what header file defines something you need, but you aren't allowed to include that file because Microsoft did something non-standard in building their libraries. (This comes up a lot with their typedefs of BOOL, TRUE, and FALSE. That topic is probably worth a whole blog post to rant about.)
In some ways, I feel that the real problem is that header files can, and often must, include other header files. Because of this, putting a single #include in a source file can result in 100 or more other headers being included. Mix in a few #ifdef or #ifndef directives and things quickly become a complete mess where order matters a lot.
How This Happens

Now it is easy to throw stones at previous developers (including those for Windows) and say they just didn't know what they were doing. Inevitably there are situations where previous developers made some poor decisions that led to structural code problems in the headers. However, many of these things can creep into code over time and maintenance programmers can easily add them in not realizing what they are doing. The reason is that some flaws in code related to #includes and headers are hard to track down unless you have a powerful static analysis tool to help you. For example, files should #include all the things that they use and not things that they don't. Sounds like a simple enough rule to follow when you are the original author of a file. The compiler won't like your code if you don't #include things you are using, and unless you just have a bad habit of adding lots of #includes at the top of every file because you "might use them" you aren't going to put in extra stuff.
However, even with the original author there can be some challenges if you rely on the compiler to tell you when you are including everything that you need. If you have one header file that includes a lot of others, it is possible you might include that one file and forget to include the others directly even though you use things in them. This doesn't sound like a problem until you, or someone else, makes a change in what that one header file includes and your source files break because it wasn't doing its own includes directly. Relying on one file to do things for you that aren't really part of its job description is generally a great way to give yourself headaches later on.
When you consider the situation of the maintenance programmer, things get much worse, especially if the code was a bit smelly to start with. It is easy to add code and just say that if it compiles everything is happy. It takes time and effort to go to the top of the file and see if there is already a #include for the things you just added in. The time and effort grow if the file is longer than it should be. Not only do you have to jump farther from your current editing point to check, but the length of the #include list generally grows as well.
The problem is even worse when you are deleting a function, method, or even a few lines of code. Figuring out if you just deleted the only reference to something in a particular #include is not a trivial task. As a result, #includes, once added, are unlikely to ever go away unless someone decides to spend some real time doing cleanup, or you have a static analysis tool powerful enough to tell you that some particular #include is no longer needed.
It Made Sense for C

So why was such an odd system put into place to begin with? Well, it made sense for C, which ran on limited systems and very strictly followed the requirement of everything happening in a single pass. There were lots of hardware reasons why the original C compilers needed to go through their programs in a single pass from top to bottom and not store lots of extra tables for things along the way. When your program is stored on a tape (or punch cards) and your machine has limited memory, you don't want to have to run through the source multiple times in the process of compiling.
What Changed?

Of course, most of those reasons are completely moot on a modern machine. This is why we have seen a march of programming languages that move more and more of the work onto the compiler. Focusing on Java, the import statement doesn't actually do anything to the code; it just tells the compiler how to resolve short names into the longer, fully specified names that they represent. (Honestly, import is really the equivalent of using in C++, not #include.)
Faster machines with more memory and the fact that you were never compiling something stored on a tape made multiple passes and random access far more acceptable. So you don't have any need to have the preprocessor spit out something that can be handled in one pass from top to bottom. You don't mind if the compiler has to go looking in separate files. In fact, that can be faster. The whole idea of precompiled headers in C and C++ only exists because opening and processing the same header files over and over for every source file in a large project can really slow things down. Losing the concept of a header file removes that overhead from the compiler. (I also really appreciate that it removes code duplication. Having to touch two files any time a function signature is adjusted has always annoyed me.)
Making import Work

In Java, a significant part of what made this approach workable on the computers available in the mid 90s was a set of very specific rules about file names and directories. The fully specified name in Java tells the compiler exactly where it should go looking for something. So all the compiler has to do is figure out the fully specified name, and that is exactly what import statements do in the code: they allow short names to be turned into fully specified names.
Newer languages have relaxed some of Java's strict naming rules and have put even more burden on the compiler. Scala comes to mind as a key example of this. It compiles to the JVM and the .class files are placed in the directory layout the JVM requires, but the compiler actually takes its cues from the source code rather than the file system. Of course, it is generally recommended that you follow the Java scheme anyway because it makes it easier for programmers to find where things are as well.