I'll admit it - getting acclimated to the world of objects is a shock. Not since the advent of structured programming has a solution been so highly touted as The Answer for the many problems of crafting software. With the advent of Visual FoxPro, all of us who have been using FoxPro to construct database applications were catapulted into the totally new environment of object technology.
This was at the same time exhilarating and terrifying. Surely OO is one of the most exciting new technologies to enter the sphere of application development since the PC.
But it's also more than a little intimidating. For Xbase developers, OO is without a doubt the largest leap from a prior version we've ever seen. How are you going to absorb it all? Just how soon can you get productive with the new environment? What if you make a disastrous architectural mistake that forces you to rewrite your first big VFP project? Being a little intimidated is certainly understandable.
This session is intended to help FoxPro developers new to the object oriented mindset make the transition to working with objects most effectively. It shares some of the “big picture” perspectives which I hope will help you get the most bang for your buck with object development.
To do this, this session focuses on a few basic principles:
OO truly does represent a radically new mindset and perspective on programming. What the marketing mavens of software haven't told you is that it won't be easy to grasp at first. By extolling OO's virtues as loudly and as often as possible (who can blame them - their job is to sell the stuff, not use it) they unknowingly exacerbate our worst fear: “Everyone else thinks this OO stuff is easy - but I just don't get it. I must be stupid.”
So what do we do? Slink off to the bookstore to buy some instant wisdom on OO. What do we find there? Shelves loaded with titles on OO! And what wisdom do we gain when we sneak home like high-schoolers with our Cliff notes? Two things:
The terminology: Bullet definitions of a half dozen multi-syllabic Greek terms like polymorphism and encapsulation (no matter which book you read, it seems the definition was copied from some other book - they all sound the same, and it makes your eyes glaze over).
The example: “This is a circle. It's an object. Let's subclass it. Now it's a red circle. Let's do that again. Now it's a red ellipse! Got it yet?!? See how easy this is?”
This drives me crazy. I'm a database programmer. I build information systems for a living. Customer records, inventory records, pick lists, calculations, reports. Exactly how is this red ellipse relevant to my problem?
In truth, I think OO will revolutionize the way we write software. Even more important, it should radically alter the way we live with what we've written. It should facilitate the way parts are assembled into solutions, and greatly reduce the amount of code we have to churn out to craft a solution to a problem.
But this change won't come easily to those of us who face the task of relearning nearly everything we know about procedural programming. It won't come overnight, and it won't come effortlessly, particularly if we're kidding ourselves about the fundamentals. How long did it take you to learn how to be a programmer in the first place? Did it dawn all at once, or was it a gradual process of “aha's” built one upon the other? Expect your personal Object Orientation to develop in that way -- gradually. The sooner we come to the resolution to proceed gently and patiently to explore this new world with fresh eyes, the sooner we can get down to business.
One thing for certain is that a lot of really smart programmers have shown that object oriented technology is applicable to nearly all aspects of software development, and that it can result in radically better software, and a radically better process of developing it. On the other hand, just because something is OOPy doesn't make it smart, efficient, maintainable, or even understandable, for that matter. There's no reason you can't create a mess of object spaghetti code as opaque and brittle as the worst procedural code you ever had to maintain.
There’s a strong parallel with the debate over the true meaning of the label “relational” in the ’80s. Xbase wasn’t a “pure” relational product, of course, but the Xbase data model allowed you to do most everything right in building a normalized dataset. Of course, it also allowed you to do nearly everything wrong. And it offered nothing to help you tell the difference between what was right and what was wrong.
Now it’s the ’90s, and history is repeating itself with the emergence of OO in Visual FoxPro: VFP gives you just as much rope in OO as it did in terms of relational theory. In fact, none of the popular new OO environments offer any guidance as to the right and wrong ways of developing effective software.
Additionally, you are already aware of the immense language bloat that was inflicted on FoxPro to stretch it so it could work with Windows, host cross-platform development, and generate GUIs. Now there's an absolutely huge new set of commands and keywords to support OOP. This means that there are even more ways than ever to approach a solution to a given problem in Visual FoxPro.
The challenge is to get to the heart of the new technology, and adopt a style that delivers the most functionality and the cleanest maintainability with the smallest amount of development effort. Pretty simple. So let's get into it.
Redundancy is a huge cause of maintenance headaches. If you have only one instance of a piece of data (a customer phone number, for example), there are only two states to worry about: either it's right or it's wrong. But if the number is stored in two places within the system, there are now at least two additional states to worry about: if the two numbers don't match, which one is right? As the number of copies of the data increases, the maintenance burden grows exponentially.
On the other side of the fence, non-redundancy gives you leverage -- when you change a product price in the product table, the new information should propagate to every screen, report, and calculation referencing that price. This is why data normalization delivers a benefit you can actually feel in your application work.
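A minimal sketch of that leverage, in Python since the principle is language-independent (the product table and field names here are invented for illustration):

```python
# One authoritative copy of the price.
products = {"P100": {"desc": "Widget", "price": 4.95}}

def line_total(product_id, qty):
    # Every calculation reads the single copy...
    return qty * products[product_id]["price"]

# ...so one change propagates to every screen, report, and calculation
# that references the price.
products["P100"]["price"] = 5.25
```

Because `line_total()` never caches its own copy of the price, there is no second state to reconcile when the price changes.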
When working with data, we use tables, which are simple structures, to eliminate redundancy. But the liabilities of maintaining similar or near-similar logic are much worse than with data. Data is either right or wrong, but program logic has to be understood and tested, which can become darn near impossible when it gets complex.
To eliminate redundancy in program logic, we need a software framework that allows us to normalize program logic similar to the way we normalize data. Programs are more complex structures, and instead of tables we need a hierarchical set of blueprints to eliminate redundancy.
In the above schematic, we can place logic to enhance all dialogs in the most basic class, logic to enhance any one platform-specific case in the middle class, and logic dealing with one locale in the most specialized class.
Layering your application's functionality into a class hierarchy does for your code what data normalization does for your data. It allows you to reduce to a single instance each kernel of logic within your application. There's a single correct place to hook each piece of enhancement logic in a properly designed class hierarchy. (Just as there's a single correct place to insert a piece of data in a properly normalized database design.)
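As a sketch of that layering (in Python, since the idea carries across languages; the class names are hypothetical): the base class carries logic common to every dialog, the middle layer carries one platform's specialization, and the leaf carries one locale's details.

```python
class BaseDialog:
    """Logic that enhances ALL dialogs lives here, once."""
    def show(self):
        return "dialog shown"

class WindowsDialog(BaseDialog):
    """Platform-specific enhancement hooks into the middle layer."""
    def show(self):
        # Add the platform behavior, then defer to the base logic.
        return "3-D border, " + super().show()

class GermanWindowsDialog(WindowsDialog):
    """Only locale-specific details live in the most specialized class."""
    caption = "Schliessen"   # localized caption; nothing else is repeated
```

Each kernel of logic appears exactly once; `GermanWindowsDialog` inherits the platform and base behavior without restating either.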
Just like data normalization, the process of class design may seem somewhat abstract from the problem at hand. But huge real-world benefits accrue to the project when your class design supports the specialization required by your application.
Inheritance is the most dramatic new feature of an object oriented environment, but it's not the only one by any means. Of equivalent impact and significance is the ability to construct complex objects by assembling other simpler objects inside containers. The creation of composite objects can be either visual, as in forms that contain grids, button groups, array properties, and so on, or non-visual. For instance, a “deal” may contain numerous Purchase Order objects, each of which contains numerous line items. Each Purchase Order may contain numerous Invoice objects, representing invoices which fulfill the purchase order, each of which contains numerous line items, and so forth.
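The non-visual "deal" example can be sketched in a few lines (Python used for illustration; all names are invented):

```python
class LineItem:
    def __init__(self, qty, price):
        self.qty, self.price = qty, price

class PurchaseOrder:
    """A composite: a PO contains numerous line items."""
    def __init__(self):
        self.items = []
    def total(self):
        return sum(i.qty * i.price for i in self.items)

class Deal:
    """A deal contains numerous Purchase Order objects."""
    def __init__(self):
        self.orders = []
    def total(self):
        # The composite delegates to its parts.
        return sum(po.total() for po in self.orders)
```

The power of the composite is that each level only knows how to sum its immediate parts; the nesting can go as deep as the business requires.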
When maintaining and enhancing an existing application, inheritance provides a remarkably useful mechanism for incrementally modifying the existing functionality at a very fine level of detail without the risk of breaking the application in production.
When designing new systems from scratch, however, you have the option to create functionality by inheritance or by containership. There is no right answer for all scenarios. To come up with an optimal design for a given problem, you will have to look at the totality of the implementation and the downstream maintenance implications.
One piece of advice from one savvy group of students of OO suggests that use of inheritance hierarchies is often overdone, at the expense of more flexible downstream maintenance. There are several problems with long inheritance hierarchies, including:
Sluggish Performance. Assembling the properties and methods for a runtime instance of a heavily subclassed object takes a lot of CPU cycles, and this computation is performed each time an object of the subclass is instantiated.
Obscuring the Clarity of Design. When you're looking at a class that is derived from a chain of parent classes more than five levels or so deep, it becomes really hard to tell which level a given behavior is controlled from. You spend an inordinate amount of time searching up the chain, with explicit “scope resolution operator”-adorned calls sprinkled around to keep things interesting. It can quickly look like spaghetti, even if it isn't.
Lack of Flexibility. As a class hierarchy becomes larger, it's tougher to get the benefit of a piece of functionality embedded deep inside it. The more you think about it, the more your gut says “This isn't the right direction. There's got to be a simpler way.”
In fact, assembling functionality by adding small, specialized objects to form composites that give powerful and flexible functionality appears to provide a superior result. Each time you spot a degree of flexibility that may broadly impact the application downstream, think about an intermediary object class which can elegantly encapsulate the functionality while limiting the impact on the rest of the code.
Adding one object into another isn't the only way to create functionality from small specialized classes. Runtime instances can simply cooperate with one another, and merely need to know about each other's existence in order to get some work done.
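A sketch of such cooperation (Python for illustration; the objects are hypothetical): the invoice neither contains nor subclasses the tax calculator, it merely holds a reference to it and delegates.

```python
class TaxCalculator:
    """A small, specialized helper object."""
    def tax(self, amount):
        return round(amount * 0.07, 2)   # assumed flat 7% rate for the sketch

class Invoice:
    """Knows about its collaborator; doesn't contain or inherit from it."""
    def __init__(self, calculator):
        self.calculator = calculator
        self.subtotal = 0.0
    def total(self):
        # Delegate the tax computation to the cooperating object.
        return self.subtotal + self.calculator.tax(self.subtotal)
```

Swapping in a different calculator object changes the tax behavior without touching the Invoice class at all, which is exactly the downstream flexibility a deep inheritance chain tends to fight against.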
The conclusion here is to consider the design alternatives carefully. There are plenty, and the developer who forms a strong opinion too early may be his own worst enemy in achieving an optimal design to fit a given problem.
There are two different techniques for building composite objects. You can simply define a custom property, and assign an object handle to it using CREATEOBJECT(). Or, if the master object is a container, you can execute an ADDOBJECT() method at runtime or an ADD OBJECT command at design time. The difference is that when you use either ADD OBJECT or ADDOBJECT(), the object becomes a member of the container and can use the Parent property to refer to the immediate container object.
When you use CREATEOBJECT() to populate a property of an object, that property simply holds a reference to an instance of another object. It does not “contain” the new object. Therefore the Parent property does not point to the “would-be container”, for in fact the object is not part of any container.
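The distinction can be mimicked in any OO language. Here is a hypothetical Python analogue (the class and method names are invented): only true containment gives the member a back-reference to its container.

```python
class Container:
    def __init__(self):
        self.members = []
    def add_object(self, obj):
        # Analogous to VFP's AddObject(): true containment, so the
        # member can reach its immediate container via its parent.
        obj.parent = self
        self.members.append(obj)

class Widget:
    parent = None   # no container until added to one

form = Container()
contained = Widget()
form.add_object(contained)      # contained.parent now points at form

referenced = Widget()
form.helper = referenced        # like a plain CREATEOBJECT() reference:
                                # referenced.parent remains None
```

`form.helper` is just a handle; the referenced widget belongs to no container and has no parent to consult.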
Either way, when you add or combine dissimilar objects together to create more specialized functionality, it is called composition.
NOTE: Objects can be assembled via composition either visually or programmatically.
The form design objects you get with Visual FoxPro provide a basic set of tools for building common interface components. However, there are several other types of classes you'll need to build from time to time, or encounter in other developers' VFP work.
As discussed above, an application object gives developers domain over all components in the application from a single point of control.
Groups of Interface Objects
Advantages: If you add an additional control to a group, it propagates everywhere the group is used. (Presuming there's space for it to appear.)
Disadvantage: To suppress one of a group of controls, you have to hide it (VISIBLE = .F.) and deal with the vacant space (perhaps by moving the other objects around).
Handle “traffic direction” and “inventory control” within the app
Examples:
Purchase Order vs Purchase Order Form
If there is substantial business processing in the application, or the possibility to connect to other applications in a workflow environment, there may be wisdom in separating certain business objects from their visual representation. In other words, a Purchase Order may in fact be a business entity distinct from a Purchase Order User Interface.
A PO object may well be of a lot of use in an information system without a form. It may need to know how to print itself, post itself, recalculate itself, validate itself, and so forth. Invoice objects may need to consult with it as part of the process of determining their own validity. It is possible that any of these processes may be executed while a PO has a visible representation in the environment, however it is equally easy to see how the PO object may be useful in processes where the visual representation (form) is not appropriate or desired.
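A sketch of such a non-visual business object (Python for illustration; the methods and line-item shape are invented):

```python
class PurchaseOrder:
    """A business object that is useful with or without a form."""
    def __init__(self, number, lines):
        self.number = number
        self.lines = lines          # e.g. [("widget", 2), ...]
        self.posted = False

    def validate(self):
        """An Invoice object could consult this while checking its own validity."""
        return bool(self.lines) and all(qty > 0 for _, qty in self.lines)

    def post(self):
        # The PO knows how to post itself; no user interface involved.
        if not self.validate():
            raise ValueError("cannot post an invalid PO")
        self.posted = True

    def as_text(self):
        """The PO knows how to print itself; no visual representation needed."""
        return f"PO {self.number}: {len(self.lines)} line(s)"

# Batch-route a group of POs without instantiating a single form:
batch = [PurchaseOrder(str(1000 + n), [("widget", 1)]) for n in range(3)]
for po in batch:
    po.post()
```

The batch loop at the end is the payoff: a hundred POs can be validated and posted with no form overhead at all.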
In simple applications, this distinction may be academic, but in more ambitious business processing, it may become essential. In these cases, agglomerating all the navigation, data I/O, security, calculation, etc., into the form representing the business object may result in a confusing overload. For instance, why endure all the overhead of creating 100 forms when you want to batch up a group of PO's and route them for processing?
Respond intelligently to errors reported from various aspects of the system
There are two published techniques for developing designs for object oriented software: CRC cards and use cases. While books have been written about both of these methodologies, even a very basic overview can help you get started in getting real leverage from the objects you build into your application.
CRC (Class-Responsibility-Collaboration) design follows a few basic steps: write each candidate class on an index card; list the responsibilities each class must fulfill; list the collaborators each class needs in order to fulfill them; then walk the cards through typical scenarios.
Look at the design from a variety of angles. Examine and reexamine the relationships and responsibilities. Look for tangled logic and try to decompose it into a simpler construct. Don't worry about physical storage at this point, and resist the urge to mold the problem into an entity relationship sketch of the data storage model.
NOTE: There are software packages that can be used to assist in this process. Check out Rational Rose. Converting its output into actual VFP classes is a fairly small task.
Use cases are hypothetical business situations against which the design is tested. Each use case serves as a scenario, against which the current version of the object design is applied. As application developers, we know that exceptions take up a disproportionate amount of the development effort in any project. Use cases are basically a technique to identify the exceptions that, if not handled, blow a hole in the design. One thing for sure: just as a developer is the worst person to test a "completed" application, developers will be unlikely to come up with use cases that break the current design model. Much more fruitful use cases will emerge from knowledgeable users with experience in the business rules and exceptions that arise in their daily work.
Taken together, CRC's and Use Cases represent a pair of approaches that remind us of the top-down vs bottom-up styles of classical analysis and design. The top-down approach reveals the structure of the major components of the design, while the bottom-up view serves as a cross-check that the design is sufficient to handle the exceptions. Trying both techniques alternately on a small project in a small team effort can be very illuminating.
Turns out a lot of smart programmers have already done some pretty deep analysis of the results of a lot of different types of OO development and abstracted the parts of the process that were common to them. They have developed an entire meta-language describing a variety of architectural components that appear over and over again in good OO solutions, and classified them into groups. The result is a hot topic in OO development: design patterns. This classification mechanism provides a language to help us get to the theoretical basis of object design, as well as share descriptions of our solutions that cut across specific language implementation.
It's appealingly easy to decide that, because there's no sense in reinventing the wheel, the no-brainer decision is to buy an application framework that will, no doubt, give you all the functionality you need and then some. That's not a bad idea, but there are some caveats to consider:
Turns out that building a set of classes is the easy part. Documenting the library, maintaining it, teaching its proper use and reuse to a team, and fitting it to the problem your application is supposed to solve, and folding all that into a development strategy that actually achieves reuse are the really time-consuming parts. A class library understood by only a single developer won't achieve much benefit in the long run.
Building Your Own Framework
Pro’s | Con’s
---|---
You know everything that’s in it and how it works. | You’re always struggling with how much effort to invest in the base vs the current project.
Its fit to requirements is enforced by your application requirements and goals. | You get cornered into building for the foundation without any assurance that a particular piece will ever “pay off” in reuse.
You’re more likely to be able to modify it without breaking it. | You solve a lot of problems that other developers have also solved – some who know more and some who have invested more than you in the solution.
Buying a Framework
Pro’s | Con’s
---|---
You get a lot more than you could build for yourself for the same money. | There’s no shortcut to learning nearly everything about the bought foundation. Often it’s harder to learn someone else’s methodology than to write one’s own.
It’s tested (hopefully) on a set of applications broader and deeper than yours. | No guarantee of fit to your requirements. Having invested heavily in learning a foundation, there’s a tendency to try to apply it to all problems. However, if the architecture is too heavy, too light, or otherwise off target, you’ll be hobbled by the burden imposed by the misfit – you’d have been better off building from scratch.
You can learn by seeing how someone else solved a particular problem. | There may be technical flaws or shortcomings in the acquired work. Fixing them introduces a new challenge – maintaining and modifying the work in such a way that updates from the source won’t collide. Version management becomes an issue.
How big and complex is the type of application you're required to build? What type of application was the target for the foundation you wish to buy? How closely do these two match? How do you go about even learning how to answer the question?
One advantage to building a simple base foundation yourself is that you'll know how it works, not only in your head somewhere, but in your fingers. In other words, you won't be straining against what you already have when you go to implement it.
Advice: Buy functionality, not just architecture.
It’s fairly easy to lay out an impressive-looking set of “skeleton” classes which support someone’s theory of “how apps ought to be designed”. However, if all the real nitty-gritty functionality is “left as an exercise for the buyer” the real value of the work is not very high. You may get good ideas from the structural design, but it won’t be proved out until the functionality is fairly complete.
When all the dust settles about our fascination with this new technology, the core issues will emerge. At the top of the list is the challenge of appropriate scaling to the application at hand. Because of its pervasive support for reusability, object architecture invites you to build tools. The Visual FoxPro environment goes one step further and invites the building of tools for building tools. This process can regress ad infinitum. Put another way, it's possible to spawn task after task of building classes and tools, while losing sight of the actual project that's paying your rent.
If you launch a nine month design process for a two week development effort, you're going to be in trouble. The same is true for two weeks of R&D for a nine month project. Many of us will be evangelizing the use of Visual FoxPro within organizations that perceive VFP to be a controversial choice, to say the least. What will be the evaluation of developer productivity if a project is overbuilt to ludicrous proportions?
This challenge is not going to be solved by a few “experts” espousing opinions about what architecture is “best” -- far removed from the context of the specific application you're building. It's going to be solved in the trenches by real-world developers doing what they do best -- applying the 80-20 rule -- the common-sense point of view that says “do the 20% of the work that yields 80% of the benefit.” Clearly you can run into trouble by over-design and over-tooling as much as by cutting corners.
What kind of reality check can keep you focused on the most productive “sweet spot”? Watch for these warning signs:
You've got an inheritance hierarchy running more than five to seven levels deep.
You've got intermediate subclasses that hold just a few lines of code or define one or two custom properties.
You're creating subclasses to allow for future capabilities that are totally out of the realm of reality.
The cure? As a reality check, try a walk through of your design with a sympathetic but relatively non-technical manager or power user. Try to explain the justification for each class, and key methods. The litmus test is not solely whether you convince them that your opinion is right. If you can listen to your own arguments as you elucidate them, and you've got the guts to admit it, it's equally likely that you'll decide that your own reasons for or against a specific design idea require revision.
The power and neatness of the tools invite you to tinker with “cool stuff” all along the way. To be of the highest service to your customers and the system's ultimate users, try to focus your most intense development effort on the core business issues.
Don't kill the project building the tools
Abstraction can be an infinite regress
The importance of the 80-20 rule
It's imperative to focus your development effort on the area where you'll get the most bang for the buck. If your application contains 40 master-detail forms, it would be best to delve most deeply into class construction in that area. Leave the 256 color animated therm-bar for another project.
When we look at the broad discipline of application development, it seems that an infinite amount of efficiency can be achieved in construction techniques (generally programming), and still the underlying problem isn’t made that much easier. At its core, the discipline of application development is still composed of a number of steps:
In all of these activities, the essence of the developer’s discipline is to ingest details as more and more is learned about the actual requirements of the system. But where do these details go?
It seems that inevitably, many of them wind up in the head of the developer. At the point the system is implemented, the developer often knows more about the customer’s business requirements than the customer himself. But with the passage of time, that knowledge dissipates.
The OOP process by itself doesn’t address the disparity between the real problem, the requirements document, the software, and the documentation and help systems. Consider what your foundation contributes to the goal of unifying the collection of details into a single, unified repository that supports consistency from design document to user documentation.
Object technology opens up dozens of new ways of constructing applications. Where a problem may have prompted a handful of procedural solutions, there will be dozens of OO approaches. OO technology allows us to abstract problems to a much higher level. When this serves the needs of application development, OO can be the force that powers far more sophisticated software development, and far more maintainable outcomes than we ever dreamed was possible in procedural code.
OO is also much less language dependent than procedural languages. We observed how FoxPro 2.x development (with GENSCRNX, drivers, etc.) moved farther and farther from a common “xbase language fluency” into a kind of specialized expertise that couldn’t be shared outside the circle of “Fox-heads”. On the other hand, OO concepts map from one implementation to another in a very straightforward way. Classes and instances, and the concepts that organize them, can be moved from one language/platform to another much more easily. Look for great design ideas to appear from many directions.
The distractions will also be enormous. The challenge to scale design to the problem at hand will be ever-present. It appears that OO design, if complete enough, essentially becomes 80% of the development effort. However, there’s the risk that once coding starts, the other 80% of the job will inexplicably appear. The absence of a clear set of guidelines and professionally accepted methodologies for pursuing OO design and development invites the possibility of projects that are designed interminably, and killed before they are ever implemented.
If this session leaves more questions than answers, I’ll be satisfied. A very open mind, however uncomfortable it makes us, is a very healthy state of mind for pursuing solutions using OO.