These are some excellent answers submitted by Alexander Moore:
--------------------------------------------------------------

1. Complexity is a good argument both for and against partial models. Complexity forces any model we create to be partial. When modeling a bridge, there is no way to create a complete model -- you can create a very high-level model that reflects people walking across the bridge, or a very low-level model that examines the materials the bridge is made from, but current technology and time constraints preclude you from creating a model that covers both of those situations and every other interaction that makes up a bridge. At the same time, models may break because their creators neglect to make them complete enough. Smith presents the example of an early-warning radar system that evaluates the possibility of a Soviet nuclear missile attack. Because the model the system's architects created did not account for radar reflections from the moon, the system failed. Most partial models are vulnerable to errors of omission, and the exact granularity of model needed for a specific task is very hard to determine.

2. Myth: Formal methods are all about program proving. While I have no experience with formal methods (outside of the first 3 weeks of this class), the point Hall makes is consistent with my experience. Hall says that a large part of the value of formal methods is that the act of writing a formal specification for a program requires so much intense thought that it forces designers to create good abstractions and architectures. In my experience, the more time I spent thinking about and working with an architecture, the more sophisticated and elegant it became. Specifically, when I jumped right into coding (the "hack and whack" method, as one of the papers describes it), my code required so much redesign and refactoring that I essentially wrote those modules 3-4 times. When I thought long and hard about the architecture before I began coding, I rarely had to redesign from scratch. Even better results came when I built prototypes, threw them away, and then designed the system. Those systems were simple enough that building the prototypes took more time than retrofitting the design would have (the prototypes were required for the class), but they gave me enough insight into the problem that I was immediately able to design a better architecture than I could have from scratch. It seems consistent that formal methods would provide at least the same benefits as a quick prototype, and quite possibly a much better understanding of the problem. That is valuable.

3. "Specialisation is the inevitable precondition and accompaniment of this evolution of successful designs." Jackson means, essentially, that progress is incremental, and that as incremental progress leads to increased complexity, engineers must specialize in ever more detailed areas of design. For instance, when the field was new, an electrical engineer could design power converters, vacuum-tube circuits, transmission lines, and so on. The early versions of these separate sub-fields were simple enough for one electrical engineer to know a lot about all of them. As the field matured, however, research in each of these areas made the designs more and more complex. As such, engineers picked a discipline, be it device physics or signal processing, to study in much more detail than the rest.
Their research carried the field further and further, and today digital audio processing is a completely separate field from analog power conversion. As a result of that specialization, though, we do both far better than engineers did in the 1960s. So the cycle continues. Jackson's point is that formal methods will not become a single, universal idea that can easily be generalized to make any and all software perfect. Instead, they will become tools, much like a table of wire gauges for designing an inductor. We will see the development of a number of formal-methods solutions for specific problems, and they will be much simpler to apply than current formal methods. Formal methods will be a single field of specialization, and other engineers will draw from the work of the formal methods specialists, instead of every developer learning to adapt formal methods specifically to his own work.

4. Hoare does not make the comparison between software developers and scientists, and I think he avoids it intentionally. I think he is trying to describe the programmers who develop software in the business world, rather than the academic programmers who research better ways to build and use software. A comparison between business programmers and physicists doesn't hold water in this context. I can't remember the last time I heard of a business consulting its local astrophysicist to help solve one of its problems. And while biomedical companies base their products in large part on academic biology research (just as software companies base many of their techniques on the products of academic computer science research), the point of this paper, in my mind, was to evaluate how clients should view software developers. Hoare pointed to a number of weaknesses in viewing developers as engineers - a lack of sufficient tools, reliability issues that stem from the complexity and youth of the field, etc. I think trying to view the software developers who work in business as scientists would result in such a broken metaphor that Hoare does not consider it. If he did, however, I think he would point to how hard it is to explain what computer scientists do as the basis for the comparison.

5. Rushby believes that formal methods are undervalued if they are used mainly to force developers to think about their architectures in enough detail to design good solutions before development begins. He believes that the real power of formal methods lies in their ability to automate verification of the models that form the software architecture and to automatically review code to make sure it meets the (verified!) models. He believes that technology going well beyond basic type-checking will arrive, and be easy to use, very soon. Initially, these tools will direct human verification effort to only those areas that the computer cannot check on its own, making better and more efficient use of test developers.

6.
- The purpose of a programming system is to make a computer easy to use. (page 42)
- Simplicity is not enough. (page 44)
- The conceptual integrity of a system determines its ease of use. (page 46)
- Discipline is good [for] art ... "Form is Liberating" (pages 46-47)
Resolving these isn't all that bad. The idea behind a programming system is, actually, to make it easier to write programs for a computer; otherwise we'd just use machine code for everything. So that doesn't conflict. "Simplicity is not enough" also makes sense.
Just because a design has few pieces doesn't mean it is straightforward or easy to use. To give an example: I found using a laser cutter significantly easier than using a mill. A mill is a much simpler device than a laser cutter, but it is not more straightforward, nor is it better. Likewise, a simple piece of code isn't necessarily a better tool than a complicated one. If I need to do a Bode analysis of a circuit, I would much rather have MATLAB, which has a tool for it, than a simpler library. The same thing happens with programming languages. The claim that conceptual integrity determines ease of use is a little fuzzier. Brooks's point is that if a single mind designs a system, the system will flow better than if several minds design different pieces of it. Essentially, when multiple people create the interfaces, there will be inconsistencies in the design that inhibit ease of use. That makes sense. Finally, the idea that discipline is good for art is embodied by a cathedral. The beauty of a cathedral (read Ken Follett's Pillars of the Earth; it's really great and I just finished it) is in the symmetry and the ratios. Similarly, the thing that makes a lot of modern art so worthless is that there is no discipline. Taking a brush and slopping it all over a canvas without even looking does not create something beautiful, contrary to my evil ex-girlfriend's attempts to convince me otherwise. So the finest program is one that is disciplined, architected by a single intelligence, with ease of use prioritized over simplicity, and with simplicity still appreciated.

7. The points that I found most surprising in Beck's article were:
-- No duplicates of anything. I expected that XP would embrace occasional duplication when it makes something clearer or easier to program. I don't necessarily agree that eliminating all duplication is always the simplest thing.
-- The use of a metaphor to describe a programming project. It sounds silly to require that there be a metaphor like "oh, it's a spreadsheet."
-- Collective ownership. This seems like it would allow people unfamiliar with a section of code to overwrite a very elegant piece of code with something "simpler" that doesn't perform as well or that will be a step backward in further development.
-- Refactoring as design. I feel like there should be a middle ground, sort of like the spiral model, and it surprises me that refactoring can replace design and still produce a functional system at all.
The things that I feel might be missing from Beck's approach are:
-- System-wide tests. In the XP-bashing article, it turns out that these tests are left to the customer. Brilliant.
-- Prototypes. The only way XP would allow prototyping is to put something into working code and then refactor as needed. That sounds dangerous.
-- Interaction of code with the outside world. A lot of software has to interact with something besides a person, and unit tests would be really hard to write when you're, say, controlling lab equipment.

8. I thought Rosenberg's most compelling point was the one case study that seemed to reflect a legitimate experience with XP. David Van Der Whatever explained how his company wrote several tests, then found out that these unit tests did not actually exercise anything hit by the menu they were trying to test. It showed a possible weakness in XP. However, Rosenberg's poor writing and asinine approach undermined even this potentially legitimate point. I don't know what YAGNI is. He didn't explain.
Further, he didn't mention that part of the XP paradigm is that the authors of the functions are supposed to write their own tests, which would have prevented this particular problem from occurring. And this is the most compelling thing in the whole paper. I thought his least compelling points were the childish "satire" and songs that he interspersed throughout the text. Showing the intellectual maturity of a five-year-old in a technical book does not enhance his argument. Instead, it makes him look like a bitter man who has lost the debate and has to resort to name-calling to maintain his viewpoint. I don't think XP has reached such a level of acceptance that the only defense against "XP is the best thing ever" is parodies of Beatles songs. I'm kind of surprised this ever got published.

9. Rushby and Smith articulate very different positions on the future of formal methods. Rushby believes that formal methods will eventually become ubiquitous as automatic verification becomes more powerful and easier to use. He believes that the combined attractiveness of a better initial design and the power to make a very substantial claim (if not a 100% proof) of correctness will make formal methods an integral part of developing good software. He believes that the power of these tools will dramatically improve software as well. Smith agrees that formal methods can help make software better, but he thinks that they address only some of the problems that plague software. He believes that the more critical problems come into play in modeling and in attempting to create an architecture that represents the real problem. Essentially, formal methods can only guarantee that the program does what its specification says, not what it should do. There are a host of separate issues that formal methods do not address, and these problems will continue to make software difficult to design. In my mind, both authors are correct. Automatic verification will probably be a reality someday. It will help make programs more reliable, less buggy, and easier to design. On the other hand, it will never protect against modeling flaws or prevent errors made at a higher level. If the specification is wrong, the analysis will prove the code correct, but it will not prove that the code works right. I do not believe that throwing away design à la XP is the correct solution to either of the issues these authors present.
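
To make that last point concrete, here is a minimal sketch in Python (my own illustration, not taken from either author; the function and specification names are invented). It shows a checker happily certifying code against a specification that is too weak, which is exactly the gap Smith worries about:

    def spec_output_is_sorted(xs, ys):
        # Deliberately incomplete specification: it only requires the output
        # to be in nondecreasing order, and forgets to require that ys be a
        # permutation of xs.
        return all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))

    def broken_sort(xs):
        # A plainly wrong "sort" that nevertheless satisfies the weak spec.
        return []

    def verify(spec, impl, test_inputs):
        # Stand-in for an automated verifier: it only checks the spec on a
        # handful of inputs instead of proving it for all inputs.
        return all(spec(xs, impl(xs)) for xs in test_inputs)

    samples = [[3, 1, 2], [5, 4], [], [7]]
    print(verify(spec_output_is_sorted, broken_sort, samples))  # prints True

The "verification" succeeds, yet the program is useless: the analysis shows only that the code is correct with respect to its specification, not that it works right.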